
Friday, December 28, 2012

Cisco AP Configuration


As we deploy WLAN solutions, we often end up configuring access points from many different vendors. In this post I will try to cover some very basic information for Cisco AP configuration.
The Cisco configuration guide covers every aspect but is very large, so only some of the important steps are covered here.

1. Connecting the AP

A Cisco AP can be reached primarily through the serial console, Telnet, HTTP, and HTTPS.

For a serial console connection use a baud rate of 9600; leave all other settings at their defaults.
Default username/password: Cisco/Cisco

2. Assigning the host name


ap>enable
Password:xxxxxxx
ap#config terminal
Enter configuration commands, one per line. End with CTRL-Z.
ap(config)#hostname my_ap
my_ap(config)#end
my_ap#


3. Assigning an IP address

To assign an IP address, first check the current IP address:

AP# show ip interface brief

Interface     IP-Address     OK?  Method  Status                  Protocol
BVI1          10.108.0.5     YES  manual   up                      up      
dot11radio0   unassigned     YES  unset   administratively down   down    
dot11radio1   unassigned     YES  unset   administratively down   down    
    

Once we know the current IP address, we can assign a new one:

configure terminal
interface bvi1
ip address <IP xxx.xxx.xxx.xxx>  <mask xxx.xxx.xxx.xxx>
end
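
For example (the address and mask below are placeholders; use values from your own network):

AP# configure terminal
AP(config)# interface BVI1
AP(config-if)# ip address 10.108.0.10 255.255.255.0
AP(config-if)# end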

4. Creating an open SSID profile

conf t
dot11 ssid <ssid name>
authentication open 
guest-mode
interface dot11Radio { 0 | 1 }
ssid <ssid name>
no shut
end
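
For example, to create an open SSID named open-test (a placeholder name) and enable it on radio 0:

AP# configure terminal
AP(config)# dot11 ssid open-test
AP(config-ssid)# authentication open
AP(config-ssid)# guest-mode
AP(config-ssid)# exit
AP(config)# interface dot11Radio 0
AP(config-if)# ssid open-test
AP(config-if)# no shutdown
AP(config-if)# end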


5. Configuring a RADIUS server on the AP along with a secure profile

config terminal
aaa new-model
radius-server host <Radius_server_IP> auth-port 1812 acct-port 1813 key <secret key in the radius server>
aaa group server radius  rad_eap
server <Radius_server_IP> auth-port 1812 acct-port 1813
aaa authentication login eap_methods group rad_eap

dot11 ssid <ssid name>
authentication open eap eap_methods
authentication network-eap eap_methods
authentication key-management wpa
interface dot11Radio { 0 | 1 }
encryption mode ciphers <aes/tkip>
ssid <ssid name>
no shut
end
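
A worked example, assuming a RADIUS server at 192.168.10.5 with shared secret labkey123 and an SSID named secure-net (all of these values are placeholders; on autonomous IOS the AES cipher keyword is typically aes-ccm):

AP# configure terminal
AP(config)# aaa new-model
AP(config)# radius-server host 192.168.10.5 auth-port 1812 acct-port 1813 key labkey123
AP(config)# aaa group server radius rad_eap
AP(config-sg-radius)# server 192.168.10.5 auth-port 1812 acct-port 1813
AP(config-sg-radius)# exit
AP(config)# aaa authentication login eap_methods group rad_eap
AP(config)# dot11 ssid secure-net
AP(config-ssid)# authentication open eap eap_methods
AP(config-ssid)# authentication network-eap eap_methods
AP(config-ssid)# authentication key-management wpa
AP(config-ssid)# exit
AP(config)# interface dot11Radio 0
AP(config-if)# encryption mode ciphers aes-ccm
AP(config-if)# ssid secure-net
AP(config-if)# no shutdown
AP(config-if)# end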


6. Creating a local DHCP server on the AP

configure terminal
ip dhcp excluded-address low_address [ high_address]
ip dhcp pool <pool_name>
network subnet_number [ mask | prefix-length ]
lease { days [ hours ] [ minutes ] |infinite }
end

example:

AP# configure terminal
AP(config)# ip dhcp excluded-address 172.16.1.1 172.16.1.20
AP(config)# ip dhcp pool test
AP(dhcp-config)# network 172.16.1.0 255.255.255.0
AP(dhcp-config)# lease 10
AP(dhcp-config)# default-router 172.16.1.1
AP(dhcp-config)# end


7. Creating a local RADIUS server on the AP

configure terminal
aaa new-model
radius-server local
nas <ip-address_of_AP> key <shared-key>
user <username>  password <password>
end

example:

AP# configure terminal
AP(config)# radius-server local
AP(config-radsrv)# nas 10.91.6.159 key 110337
AP(config-radsrv)# nas 10.91.6.162 key 110337
AP(config-radsrv)# nas 10.91.6.181 key 110337
AP(config-radsrv)# user jsmith password twain74 


8. Setting the clock on the AP

AP# clock set 13:32:00 23 July 2001

9. Configuring the data rates on the AP

configure terminal
interface dot11radio { 0 | 1 }
speed
802.11b, 2.4-GHz radio:
{[1.0] [11.0] [2.0] [5.5] [basic-1.0] [basic-11.0] [basic-2.0] [basic-5.5] | range | throughput}
802.11g, 2.4-GHz radio:
{[1.0] [2.0] [5.5] [6.0] [9.0] [11.0] [12.0] [18.0] [24.0] [36.0] [48.0] [54.0] [basic-1.0] [basic-2.0] [basic-5.5] [basic-6.0] [basic-9.0] [basic-11.0] [basic-12.0] [basic-18.0] [basic-24.0] [basic-36.0] [basic-48.0] [basic-54.0] | range |
throughput [ofdm] | default }
802.11a 5-GHz radio:
{[6.0] [9.0] [12.0] [18.0] [24.0] [36.0] [48.0] [54.0] [basic-6.0] [basic-9.0] [basic-12.0] [basic-18.0] [basic-24.0] [basic-36.0] [basic-48.0] [basic-54.0] |
range | throughput | default }
802.11n 2.4-GHz radio:
{[1.0] [11.0] [12.0] [18.0] [2.0] [24.0] [36.0] [48.0] [5.5] [54.0] [6.0] [9.0] [basic-1.0] [basic-11.0] [basic-12.0] [basic-18.0] [basic-24.0] [basic-36.0] [basic-48.0] [basic-5.5] [basic-54.0] [basic-6.0] [basic-9.0] [default] [m0-7] [m0.] [m1.] [m10.] [m11.] [m12.] [m13.] [m14.] [m15.] [m2.] [m3.] [m4.] [m5.] [m6.] [m7.] [m8-15] [m8.] [m9.] [ofdm] [only-ofdm] | range | throughput }
802.11n 5-GHz radio:
{[12.0] [18.0] [24.0] [36.0] [48.0] [54.0] [6.0] [9.0] [basic-12.0] [basic-18.0] [basic-24.0] [basic-36.0] [basic-48.0] [basic-54.0] [basic-6.0] [basic-9.0] [default] [m0-7] [m0.] [m1.] [m10.] [m11.] [m12.] [m13.] [m14.] [m15.] [m2.] [m3.] [m4.] [m5.] [m6.] [m7.] [m8-15] [m8.] [m9.] | range | throughput }
end

example:
conf t
int d0
speed basic-1.0 54.0
end

10. Configuring the channel on the AP

configure terminal
interface dot11radio { 0 | 1 }
channel
{frequency | least-congested | width [20 | 40-above | 40-below] | dfs }
end

example:

conf t
int d0
channel 36
end


11. Configuring the radio transmit power on the AP

configure terminal
interface dot11radio { 0 | 1 }
power local
These options are available for the 802.11b, 2.4-GHz radio (in mW):
{ 1 | 5 | 20 | 30 | 50 | 100 | maximum }
These options are available for the 5-GHz radio (in mW):
{ 5 | 10 | 20 | 40 | maximum }
These options are available for the 802.11a, 5-GHz radio (in dBm):
{-1 | 2 | 5 | 8 | 11 | 14 | 15 | 17 | maximum }
These options are available for the AIR-RM21A 5-GHz radio (in dBm):
{ -1 | 2 | 5 | 8 | 11 | 14 | 16 | 17 | 20 | maximum }
These options are available for the 2.4-GHz 802.11n radio (in dBm):
{ -1 | 2 | 5 | 8 | 11 | 14 | 17 | 20| 23 | maximum }
end

example:

conf t
int d0
power local 17
end

12. Configuring world mode (regulatory domain) on the AP

configure terminal
interface dot11radio { 0 | 1 }
world-mode dot11d country_code <code> { both | indoor | outdoor }
world-mode roaming
world-mode legacy
end

example:

conf t
int d0
world-mode dot11d country_code US both
end

13. Disabling Short Radio Preamble

configure terminal
interface dot11radio 0
no preamble-short
end

Enabling Short Radio Preamble

configure terminal
interface dot11radio 0
preamble-short
end

14. Selecting the Transmit and Receive Antennas

configure terminal
interface dot11radio { 0 | 1 }
antenna receive
{diversity | left | middle | right}
antenna transmit
{diversity | left | right}
end

example:
configure terminal
interface dot11radio  0
antenna receive left
antenna transmit left
end

15. Enabling and disabling the radios

configure terminal
interface dot11radio { 0 | 1 }
shut
no shut
end

Saturday, October 6, 2012

How Roaming, PMK Caching, OKC and Pre-authentication Work


In this topic we will cover the following points:

1. What is Wi-Fi roaming and why is it required?
2. Different infrastructures where roaming can happen.
3. Different ways of handling roaming.

1. What is Wi-Fi roaming and why is it required?


Just as with mobile phones, where roaming happens seamlessly between cell towers while we move around in cars, trains and buses so that our calls are not dropped, the same applies to laptops and smartphones connected to the network through Wi-Fi. We may be downloading a movie or a game, or talking on Skype over Wi-Fi, so we need fast transitions from one AP to another without the user noticing.

The day is coming when everybody will make Skype video calls over Wi-Fi. Some cities have already deployed city-wide Wi-Fi, where Wi-Fi can be used much like our cell towers.

2. Different infrastructures where roaming can happen


Roaming happens whenever we move from the coverage area of one AP to the coverage area of another AP in the same ESS. As we know, a BSS is the coverage area of a single AP, as in the picture below.

Fig: 1


An ESS is the coverage area of two or more APs that share the same SSID, so that clients can roam between those APs without disconnecting from the network, as in the picture below.

Fig: 2

So from the above discussion we understand that roaming happens whenever we have an ESS. ESS roaming can happen in different scenarios:

a. Roaming between two independent APs (autonomous APs, as in Fig. 2 above)

b. Roaming between two APs under the same controller (thin APs)

c. Roaming between two APs under two different controllers

3. Different ways of handling roaming

Usually, if we use open authentication without any security, there is not much delay in connecting. In practice, however, we use different authentication methods to protect our network, so completing the authentication takes some time and adds delay when re-connecting. Different techniques are therefore used to overcome this. Whenever a client roams from one AP to another AP, re-association happens.

Re-association can happen in 4 different ways

a. Full dot1x authentication with new AP
b. PMK caching
c. Pre-authentication
d. Opportunistic Key caching (OKC)

a. Full dot1x authentication with new AP


Whenever we roam to a new AP for the first time, the client performs the complete 802.1X process, as shown below.




But time-critical applications like voice and video suffer, because the full 802.1X process takes a considerable amount of time while re-connecting to the network.


b. PMK caching


  • Usually, whenever we connect to an AP with any 802.1X method or PSK, we derive the PMK, followed by the PMKSA.
  • With PMK caching, whenever we connect to an AP we save the PMKSA (the PMKID is part of the PMKSA) for its lifetime.

Later, if we try to connect to the same AP (BSSID) again, we check whether the PMKSA of that AP is available in the client cache.



  • If it is available, we send the PMKID in the re-association request.
  • The AP then checks its own PMK cache; if a matching entry is available, it skips the 802.1X process and goes directly to the first step of the 4-way handshake.
  • This saves a considerable amount of time when re-connecting to the AP.

c. Pre-authentication

  • In pre-authentication, the client authenticates to the other APs in the ESS even though it is not associated with them; the client may not even be in those APs' coverage area yet.
  • So whenever it moves into such an AP's coverage area, the client can skip the 802.1X process and continue with the 4-way handshake.

  • In pre-authentication the client authenticates to the other APs through the AP it is currently associated with. Whenever the client sends an EAPOL request, the current AP forwards it to the target AP through the distribution system.
  • To identify these frames, the client sends them with EtherType 88-C7 instead of 88-8E. For pre-authentication to happen, both the client and the AP have to support it; this capability can be seen in the beacon frame of the AP.



d. Opportunistic Key caching (OKC)


  • Opportunistic Key Caching (OKC) is supported by only a few vendors, such as Aruba and Motorola.
  • OKC happens with controller-based infrastructure rather than autonomous APs.
  • A controller-based infrastructure works in a split-MAC architecture, where part of the operations are handled at the AP and part at the controller.
  • Whenever a client completes the 802.1X process with AP1 under the controller, both the client and AP1 have PMKID1.
  • This PMK/PMKID1 is forwarded to the controller.
  • The controller forwards it to the other APs in the network under that controller.
  • For deriving PMKID2 with the second AP (AP2), the client uses the standard formula for calculating the PMKID:
  • PMKID = HMAC-SHA1-128(PMK, "PMK Name" || AA || SPA), where AA is the AP's MAC address and SPA is the client's MAC address.

  • So when it roams to the second AP, the client already has PMKID2 for that AP.
  • The second AP already has PMKID2 as well (computed with the same formula from the material distributed by the controller), and it compares the client's PMKID2 with its own.
  • If they match, it skips the 802.1X process and goes to the first step of the 4-way handshake.


Saturday, August 18, 2012

How WLAN CSMA/CA Works


In wireless LANs, CSMA/CA is the core concept for communicating over the air. In any shared medium, accessing the medium without collisions is an important part of the protocol. It is like not everyone talking at the same time, so that the others can understand what is being said.

                       In Ethernet this is handled with CSMA/CD, because a wired station can detect a collision while it is transmitting. Wireless communication, however, is half-duplex, and the half-duplex constraint applies to devices that share a relative physical area and RF frequency. Several mechanisms are used as part of 802.11 channel access to minimize the likelihood of frame collisions when multiple STAs attempt to access the transmission medium simultaneously.


Collisions often occur, but the processes used by the 802.11 protocol are in place to minimize the likelihood of collisions and define the appropriate response in the event that a collision is inferred.

The two most common coordination/access methods used in WLANs are:

  1. Distributed Coordination Function (DCF)
  2. Enhanced Distributed Channel Access (EDCA, which is part of HCF)

1. Distributed Coordination Function (DCF)


  • The foundational DCF coordination function logic is active in every station (STA) in a basic service set (BSS) whenever the network is in operation, i.e. each station within a DCF follows the same channel access rules.
  • This method is contention-based, which means that each device “competes” with one another to gain access to the wireless medium.
  • After a transmission opportunity is obtained and observed, the contention process begins again.
  • As the original 802.11 network access method, DCF is the most simple channel access method but it lacks support for quality of service (QoS).
  • In order to maintain support for non-QoS devices in QoS-enabled networks, support for DCF is required for all 802.11 networks.




2. Hybrid Coordination Function (HCF)


  • As an optional access method that may be used in addition to DCF, HCF was introduced to support QoS. 
  • HCF assimilated elements of both DCF and PCF mechanisms, creating a contention-based HCF method, called EDCA, and a contention-free HCF method, called HCCA.
  • EDCA inaugurated a means of prioritizing contention-based wireless medium (WM) access by classifying 802.11 traffic types by User Priorities (UP) and Access Categories (AC).
  • There are a total of 8 UPs, which map to 4 ACs. 
  • EDCA is used by stations that support QoS in a QoS BSS to provide prioritized WM access, but HCF is not used in non-QoS BSSs.


3.Point Coordination Function (PCF)


  • PCF is an optional, contention-free access method.
  • PCF provides polling intervals to allow uncontended transmission opportunities for participating client devices.
  • In this approach, the AP of a BSS acts as a point coordinator (PC), initiating contention-free periods in which prioritized medium access is granted to clients, one at a time.
  • PCF has gone unused in 802.11 WLANs.

Summary

• DCF is the fundamental, required contention-based access service for all networks
• PCF is an optional contention-free service, used for non-QoS STAs
• HCF Contention Access (EDCA) is required for prioritized contention-based QoS services
• HCF Controlled Access (HCCA) is required for parameterized contention-free QoS services

802.11 Channel Access Mechanisms


Both contention-based access methods described previously (i.e. DCF and EDCA) employ similar mechanisms to moderate channel access and to minimize collisions.

Below is an outline of channel access (a small worked backoff example follows the list):

1. STAs use a physical carrier sense (Clear Channel Assessment—CCA) to determine if the WM is busy.
2. STAs use virtual carrier sense (Network Allocation Vector—NAV) to detect if the WM is busy. When the virtual timer (NAV) reaches zero, STAs may proceed.
3. If conditions 1 and 2 are met, STAs wait the necessary IFS interval, as prescribed by the protocol.
4. If conditions 1 and 2 are met through the duration of condition 3, STAs generate a random backoff number in accordance with the range of allowed values.
5. STAs begin decrementing the backoff timer by one for every slot time duration that the WM is idle.
6. After decrementing the backoff value to zero, with an idle medium, a STA may transmit the allotted frame exchange, in accordance with the parameters of the obtained transmission opportunity.
7. If another STA transmits before Step 6 is completed, STAs observe steps 1, 2, 3, and 5 until the backoff timer is equal to zero.
8. After a successful transmission, repeat as needed.
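
As a rough worked example (using the 802.11a/g OFDM values aCWmin = 15, aSlotTime = 9 µs and DIFS = 34 µs): a STA that draws a random backoff of 7 from the range 0–15 waits

backoff    = 7 × 9 µs = 63 µs
total wait ≈ DIFS + backoff = 34 µs + 63 µs = 97 µs (assuming the medium stays idle)

before transmitting. After a failed transmission the contention window roughly doubles (15 → 31 → 63 … up to aCWmax = 1023), so the next random backoff is drawn from a larger range.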



Carrier Sense


  • 802.11 WLANs use Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA).
  • For STAs to cooperate effectively on a half-duplex channel, each STA must be able to determine when the medium is clear, and when another device is actively transmitting. 
  • The physical carrier sense uses the physical radio interface to sample the wireless medium to detect transmissions. Virtual carrier sense refers to the use of Duration values in the MAC header of a frame and NAV timers to virtually determine if another STA is transmitting.
  • These two mechanisms determine the status of the medium, whether idle or busy.


Physical Carrier Sense


  • The physical carrier sense mechanism defined by IEEE is known as clear channel assessment (CCA).
  • CCA is the physical measurement taken by a radio interface to determine if the wireless medium is currently in use. The physical layer, which can be divided into two sublayers—the physical medium dependent (PMD, lower sublayer) and the physical layer convergence procedure (PLCP, upper sublayer)—performs this task and communicates this information to the MAC. 
  • According to the CCA mode in use, the PMD issues service primitives to the PLCP sublayer indicating whether the wireless medium is in use. 
  • The PLCP sublayer then communicates with the MAC layer to indicate a busy or idle medium, which prevents the MAC from attempting to forward a frame for transmission.
  • Each PHY within the 802.11-2007 standard dictates the specific operations and signal thresholds used to carry out the CCA mechanism.
  • CCA can be divided into two separate processes: Energy detection (ED) and carrier sense (CS). 
  • ED functionality is based upon raw RF energy. When an energy level detected in the channel crosses a certain threshold for a certain period of time, the “busy medium” indication will be triggered.
  • On the other hand, CS—more precisely, “preamble detection”—monitors and detects 802.11 preambles, which are used to trigger the CCA mechanism and indicate a busy medium.



Virtual Carrier Sense




  • Virtual carrier sense mechanism is defined in addition to the physical carrier sense.
  • Virtual carrier sense uses information found in 802.11 frames to predict the status of the wireless medium.
  • This is performed by means of the network allocation vector (NAV), which is a timer that is set using the duration values in the MAC header of a frame. 
  • Each frame contains a duration value, indicating the time required for a station to complete the conversation. 
  • All STAs use these duration values to set their NAV, and then they count down the NAV timer, waiting for the medium to become available.


See the Duration field in the MAC header in the picture below (second field).






  • All STAs attempt to process all frames—and a minimum of the first in a frame exchange—on their channel.
  • The first frame in a frame exchange is significant because it can, and sometimes must, be used to determine how long a given transmission opportunity will occupy the wireless medium.

See below wireless capture for Duration value







  • The MAC header of each frame contains a Duration field, which indicates the amount of time necessary to complete the entire frame exchange, or the entire TXOP duration. 
  • In DCF, a transmission opportunity only allows for the transmission of one frame, thus the Duration value represents the required IFS interval and the acknowledgement frame (ACK), if one is required.
  • The exception to this rule is for networks in which RTS/CTS or CTS-to-self protection is enabled. In this case, the transmission opportunity allows for the use of these frames. 
  • In HCF, several frames may be transmitted within a transmission opportunity. Thus, the Duration value refers to the TXOP duration. 
  • In either case, non-transmitting STAs must remain idle while the medium is reserved.
  • When STAs read the Duration value in a frame, they set their NAV timer accordingly and count down this duration.
  • The Duration value in the MAC header indicates the time required to complete the transmission opportunity after the current frame (the frame in which the Duration value resides) is completed. 
  • If a STA is counting down its NAV and it receives another frame with a longer duration (would increase its NAV), the STA increases its NAV accordingly.
  • Conversely, when a STA receives a frame with a shorter duration value (would decrease its NAV), the STA ignores this value and continues to observe the longer NAV duration.


See below RTS / CTS exchange





  • In networks where mixed PHY technologies are supported, protection mechanisms are enabled to satisfy the requirements of frame processing and adherence to the common channel access protocol.
  • Frames used as a protection mechanism (often an RTS/CTS exchange or CTS-to-Self) are transmitted at a common rate understood by all PHYs in the network.
  • Legacy PHY STAs read the Duration value in the protection frame(s) and set their NAV timers accordingly.


Interframe Spacing (IFS)


  • After each frame transmission, 802.11 protocols require an idle period on the medium, called an interframe space (IFS). 
  • The length of the IFS is dependent upon a number of factors, such as the previous frame type, the following frame type, the coordination function in use, the access category of the following frame (in a QoS BSS), as well as the PHY type.
  • The purpose of an IFS is both to provide a buffer between frames to avoid interference as well as to add control and to prioritize frame transmissions.
  • Each IFS “is the time from the end of the last symbol of the previous frame to the beginning of the first symbol of the preamble of the subsequent frame as seen at the air interface.”




In other words, the IFS interval is observed beginning with the completion of the previous frame. The length of each IFS interval, excluding AIFS, is fixed for each PHY

Short Interframe Space (SIFS)



  • SIFS are used within all of the different coordination functions.
  • For 802.11-2007, SIFS is the shortest of the IFSs and is used prior to ACK and CTS frames as well as the second or subsequent MPDUs of a fragment burst.
  • However, with 802.11n, a shorter IFS (RIFS) was introduced.
  • The IEEE explains the use of SIFS accordingly:

                              “SIFS shall be used when STAs have seized the medium and need to keep it for the duration of the frame exchange sequence to be performed. Using the smallest gap between transmissions within the frame exchange sequence prevents other STAs, which are required to wait for the medium to be idle for a longer gap, from attempting to use the medium, thus giving priority to completion of the frame exchange sequence in progress.”


  • SIFS is used as a priority interframe space once a frame exchange sequence has begun. 
  • This is true when multiple frames are transmitted within a TXOP (as with frame bursting) and it is also true when a single frame is transmitted (as with typical data-ack exchanges).

PCF Interframe Space (PIFS)

  • PIFS are used by STAs during the contention-free period (CFP) in PCF mode. 
  • Because PCF has not been implemented in 802.11 devices, you will not see PIFS used for this purpose. 
  • However, PIFS may be used as a priority access mechanism for Channel Switch Announcement frames, as used to meet DFS requirements. 
  • In order to gain priority over other STAs during contention, the AP can transmit a Channel Switch Announcement frame after observing a PIFS.

DCF Interframe Space (DIFS)

  • When a STA desires to transmit a data frame (MPDU) or management frame (MMPDU) for the first time within a DCF network, the duration of a DIFS must be observed after the previous frame’s completion.
  • The duration of a DIFS is longer than both the SIFS and PIFS.

Arbitration Interframe Space (AIFS)

  • With EDCA, the AIFS shall be used by QoS STAs to transmit all data frames (MPDUs), all management frames (MMPDUs), and the following control frames: PS-Poll, RTS, CTS (when not transmitted as a response to an RTS), BlockAckReq, and BlockAck (when not transmitted as a response to a BlockAckReq).
  • The basic contention logic is the same as with non-QoS networks, but in order to facilitate QoS, there are some notable differences.
  • While DCF can designate a single DIFS value for each PHY, EDCA establishes unique AIFS durations for access categories (AC). 
  • For this reason, an AIFS is typically notated as an AIFS[AC]. 
  • QoS STA’s TXOPs are obtained for a specific access category, so delineation between ACs must be made.
  • For improved control of QoS mechanisms, AIFS values are user-configurable. 
  • By default, QoS APs announce an EDCA parameter set in the Beacon frame that notifies stations in the BSS about QoS values.
  • By changing these values in the AP configuration, the AP will broadcast a different set of parameters to the BSS.

Extended Interframe Space (EIFS)

  • The EIFS value is used by STAs that have received a frame that contained errors. 
  • By using this longer IFS, the transmitting station will have enough time to recognize that the frame was not received properly before the receiving station commences transmission.
  • If, during the EIFS duration, the STA receives a frame correctly , it will resume using DIFS or AIFS, as appropriate.

Reduced Interframe Space (RIFS)

  • RIFS were introduced with 802.11n to improve efficiency for transmissions to the same receiver in which a SIFS-separated response is not required, such as a transmission burst.


See below Pic for IFS comparison





  • The graphic demonstrates the relationship between the different IFS intervals.
  • You will notice that the initial frame (“Busy Medium”) transmission is preceded by a DIFS or AIFS.
  • The graphic shows the relative relationship of the IFS lengths.
  • SIFS are the shortest IFS (excluding 802.11n’s RIFS) and PIFS are second shortest, while DIFS and AIFS take up the caboose.
  • SIFS, PIFS, and RIFS are used to provide priority access for a given type of frame, which eliminates the need for added contention


Calculating an Interframe Space


The 802.11-2007 specification provides the information necessary for us to calculate the durations for each IFS.
As noted previously, SIFS, PIFS, and DIFS are fixed values for each PHY, while AIFS will vary in accordance with the AC in use.
EIFS are fixed per PHY in DCF networks, but vary when used with EDCA.
The formulas and components used for SIFS, PIFS, DIFS, EIFS, and AIFS calculations are as follows:

aSIFSTime = aRxRFDelay + aRxPLCPDelay + aMACProcessingDelay + aRxTxTurnaroundTime
aSlotTime = aCCATime + aRxTxTurnaroundTime + aAirPropagationTime + aMACProcessingDelay

The “aSIFSTime” is the same as a SIFS, measured in microseconds (µs). Similarly, the “aSlotTime” is the same as a slot time. Both of these values are provided for each PHY in the 802.11 specification.

PIFS = aSIFSTime + aSlotTime
DIFS = aSIFSTime + 2 × aSlotTime

Given that the SIFS and slot time values are provided for us in the standard, these calculations are pretty simple.
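
For example, for the OFDM (802.11a/g) PHY, where aSIFSTime = 16 µs and aSlotTime = 9 µs:

SIFS = 16 µs
PIFS = 16 + 9 = 25 µs
DIFS = 16 + 2 × 9 = 34 µs

For the 802.11b (HR/DSSS) PHY, aSIFSTime = 10 µs and aSlotTime = 20 µs, giving PIFS = 30 µs and DIFS = 50 µs.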

See below IFS calculations



EIFS (DCF) = aSIFSTime + DIFS + ACKTxTime

In this formula, the “ACKTxTime” is the amount of time it takes to transmit an ACK frame at the lowest mandatory rate in the BSS.

EIFS (EDCA) = aSIFSTime + AIFS[AC] + ACKTxTime

The EIFS (EDCA) formula mirrors the same for DCF, but replaces the DIFS with the appropriate AIFS[AC].

An AIFSN is a number (AIFS Number) value that is user-configurable and determines the brevity (or length) of an AIFS interval. AIFSN values are set for each access category, giving the AIFS[AC] a shorter or longer duration, in accordance with the desired priority.

This is demonstrated by the AIFS[AC] formula:
AIFS[AC] = AIFSN[AC] × aSlotTime + aSIFSTime
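
Plugging in the OFDM values above (aSlotTime = 9 µs, aSIFSTime = 16 µs) together with the usual default AIFSN values (2 for voice and video, 3 for best effort, 7 for background):

AIFS[AC_VO] = 2 × 9 + 16 = 34 µs
AIFS[AC_BE] = 3 × 9 + 16 = 43 µs
AIFS[AC_BK] = 7 × 9 + 16 = 79 µs

so higher-priority traffic waits a shorter AIFS and therefore tends to win contention.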

Friday, August 17, 2012

WMM and U-APSD Power save mode

                                       To understand WMM power save, it is essential to first understand QoS (WMM). Wireless networks have been widely adopted by all kinds of users, and new applications that use video and multimedia streaming, like YouTube and Facebook, have brought challenging quality of service (QoS) requirements. The growing demand for bandwidth has caused network congestion and slowdowns, but all users want multimedia distribution to work perfectly. These requirements triggered the development of a QoS enhancement for the wireless LAN.


 See the MAC header change in the 802.11 frame below.



                 The CSMA/CA technique is intended to provide fair and equal access to all devices. It is essentially a "listen-before-talk" mechanism. When networks become overloaded, performance becomes uniformly poor for all users and all types of data. QoS modifies the access rules to provide a useful form of "controlled unfairness": data identified as having a higher priority is given preferential access to the medium, and therefore gains access at the expense of lower-priority traffic. For example, when an FTP transfer and a voice call are running at the same time, the voice call gets priority over the data.







EDCA  access is an extension of the legacy CSMA/CA DCF mechanism to include priorities. The contention window and backoff times are adjusted to change the probability of gaining medium access to favor higher priority classes. A total of eight user priority levels are available. Each priority is mapped to an access category, which corresponds to one of four transmit queues.

Each queue provides frames to an independent channel access function, each of which implements the EDCA contention algorithm. When frames are available in multiple transmit queues, contention for the medium occurs both internally and externally, based on the same coordination function, so that the internal scheduling resembles the external scheduling. Internal collisions are resolved by allowing the frame with higher priority to transmit, while the lower priority invokes a queue-specific backoff as if a collision had occurred. 

The parameters defining EDCA operation, such as the minimum idle delay before contention, and the minimum and maximum contention windows, are stored locally at the QSTA. These parameters will be different for each access category (queue) and can be dynamically updated by the QoS access point (QAP) for each access category through the EDCA parameter sets. 

These are sent from the QAP as part of the beacon, and in probe and re-association response frames. This adjustment allows the stations in the network to adjust to changing conditions, and gives the QAP the ability to manage overall QoS performance. 

Under EDCA, stations and access points use the same access mechanism and contend on an equal basis at a given priority. A station that wins an EDCA contention is granted a TXOP-the right to use the medium for a period of time. 

The duration of this TXOP is specified per access category, and is contained in the TXOP limit field of the access category (AC) parameter record in the EDCA parameter set. A QSTA can use a TXOP to transmit multiple frames within an access category. 

If the frame exchange sequence has been completed, and there is still time remaining in the TXOP, the QSTA may extend the frame exchange sequence by transmitting another frame in the same access category. The QSTA must ensure that the transmitted frame and any necessary acknowledgement can fit into the time remaining in the TXOP.

WMM power save mode (U-APSD)

  • WMM power save mode is the enhancement to the  legacy power save mode.
  • WMM Power Save was mainly designed for mobile and cordless phones that support VoIP.
  • WMM Power Save promotes more efficient and flexible over-the-air transmission and power management by  enabling individual applications to control capacity and latency requirements
  • VoIP applications are extremely sensitive to delays.
  • Increased latency is a side effect of power save mode, and WMM Power Save addresses this.
  • The application-based approach used in WMM Power Save enables individual applications to decide how often the  client needs to communicate with the access point and how long it can remain in a dozing state.
  • Legacy power save mode, by contrast, is based on the listen interval, irrespective of the active applications.
  • Applications that do not initiate power save can still coexist with WMM Power Save enabled applications on the same device.
  • In this case, data from the other applications will be delivered with legacy power save, while WMM Power Save  applications will still enjoy its additional functionality as long as the access point also supports WMM Power Save.

            The core technology used by WMM and WMM Power Save depends on enhancements to the 802.11 Media Access Control (MAC) layer.
WMM categorizes Wi-Fi traffic into four different access categories:
  1.  Voice
  2.  Video
  3.  Best Effort
  4.  Background
 For example, in a Wi-Fi network with WMM, voice receives priority over all other types of traffic, thus improving the performance of voice applications.
 WMM Power Save is based on Unscheduled Automatic Power Save Delivery (U-APSD)

 WMM Power Save improves the efficiency of legacy power save by increasing the amount of time the client is allowed to doze and by decreasing the number of frames that a client needs to send and receive, in order to download the same number of frames buffered by the access point as before. It consists of a signaling mechanism added to WMM that enables the access point to buffer data frames and send them to the client upon its request.

 Power save behavior is negotiated during the association of a client with an access point. WMM Power Save or legacy power save is set for each WMM AC (voice, video, best effort, background) transmit queue separately. For each AC queue, the access point will transmit all the data using either WMM Power Save or legacy power save using the WMM QoS mechanism.
While clients using legacy power save need to wait for the beacon frame to initiate a data download, WMM Power Save clients can initiate the download at any time, thus allowing more frequent data transmission for applications that require them.
There are two ways in which the access point may send the buffered data frames to the client. If the data belongs to a legacy power-save queue, transmission follows legacy power save. If the data belongs to a WMM Power Save queue, data frames are downloaded according to a trigger-and-delivery mechanism.


The client sends a trigger frame on any of the ACs using WMM Power Save to indicate that it is awake and ready to download any data frame that the access point may have buffered. Unlike with legacy power save, the trigger frame can be any data frame, thus eliminating the need for a separate PS-poll frame which contains only signaling data.
After the client has sent a trigger frame, the access point acknowledges it is ready to send the data. Data frames are sent during an EDCA Transmit Opportunity (TXOP) burst, with each data frame interleaved with an acknowledgement frame from the client. On the last data frame, the access point indicates that no more data frames are available and the client can revert to its dozing state.



U-APSD steps :


1. The procedures apply to unicast QoS-Data and QoS-Null frames that are to be delivered to a WMM STA when the STA is in PS-mode. U-APSD shall only be used to deliver unicast frames to a WMM STA. Broadcast/multicast frame delivery shall follow the legacy frame delivery rules .

2. The WMM power-save procedures are based on the legacy procedures, but an option for unscheduled automatic power-save delivery (U-APSD) is added. WMM APs capable of supporting U-APSD shall signal this capability through the use of the U-APSD subfield (b7) in the QoS Info Field in Beacon, Probe Response and (Re)Association Response management frames.

3. In order to configure a WMM AP to deliver frames, the WMM STA designates one or more of its
ACs to be delivery-enabled ACs and one or more of its ACs to be trigger-enabled ACs. A WMM
STA may configure a WMM AP to use U-APSD using two methods.

4. First, a WMM STA may set individual U-APSD Flag bits (b3~b0) in the QoS Info field of the WMM Information element carried in (re)association request frames (see §2.2.1). When a U-APSD Flag bit is set to 1, it indicates that the corresponding AC is both a delivery-enabled AC and a trigger-enabled AC. When a U-APSD Flag bit is set to 0, it indicates that the corresponding AC is neither a delivery-enabled AC nor a trigger-enabled AC. When all four U-APSD Flag subfields are set to 1 in the most recent (re)association request frames, all the ACs associated with the WMM STA are trigger-enabled ACs and delivery-enabled ACs upon successful (re)association. When all four U-APSD Flag subfields are set to 0 in (re)association request frames, the ACs associated with the WMM STA are neither trigger-enabled ACs nor delivery-enabled ACs upon successful (re)association.

5. Alternatively, a WMM STA may request one or more ACs as trigger-enabled ACs and one or more ACs as delivery-enabled ACs by sending an ADDTS request per AC to the WMM AP with the PSB subfield (b10) in the TS Info field in the TSPEC element. In an ADDTS Response, the WMM AP must preserve the setting of the PSB subfield from the ADDTS Request. Requests to designate an AC as a delivery-enabled AC or trigger-enabled AC are admitted when the Status Code is equal to 0 in an ADDTS response. A WMM STA may request an AC to be a trigger-enabled AC with a TSPEC with the PSB subfield set to 1 in the uplink direction. A WMM STA may request an AC to be a delivery-enabled AC with a TSPEC with the PSB subfield set to 1 in the downlink direction. A bi-directional TSPEC with the PSB subfield set to 1 makes an AC both a trigger-enabled AC and a delivery-enabled AC. A bi-directional TSPEC with the PSB subfield set to 0 makes that AC neither a trigger-enabled AC nor a delivery-enabled AC.

6. APSD settings in an admitted TSPEC take precedence over the static U-APSD settings carried in the WMM Information element in the most recent (re) association request. 

Below is U-APSD operation with MoreData=1 

7. In other words, an admitted TSPEC overwrites any previous UAPSD setting of an AC. An acknowledged DELTS for a bi-directional TS or a sole unidirectional TS for an AC reverts that AC to the static U-APSD settings carried in the WMM Information element in the most recent (re) association request. 

8. If there are two admitted unidirectional TSs in an AC, an acknowledged DELTS for one of the TSs results in a U-APSD setting for the AC per the PSB bit from the TSPEC of the remaining TS.

9.  WMM STAs use the Power Management field (b12) in the frame control field  of a frame
to indicate whether it is in active or power-save mode. As U-APSD is a mechanism for the
delivery of downlink frames to powersaving stations, the uplink frames sent by a WMM STA
using U-APSD shall have the Power Management bit in the frame control field set to 1 for
buffering to take place at the WMM AP. WMM STAs may use U-APSD to have some or all
frames of delivery-enabled ACs delivered during Unscheduled Service Periods (USPs). A WMM
STA chooses legacy versus U-APSD behavior on a per-AC basis.

10. If, for a particular WMM STA, an AC is not a delivery-enabled AC, then all downlink frames
destined to that WMM STA that map to that AC are buffered and delivered using the procedures
described in [1]. The buffer used to hold these frames will be referred to as the legacy PS buffer.
The WMM AP uses the TIM and the More Data bit (b13) carried in Frame Control Field to
indicate the status of the legacy PS buffer .

11. Transmission of a Trigger Frame is not implicitly allowed by admission of a downlink TS. If the
Trigger Frame maps to an AC that has ACM=1, then the WMM STA must establish a suitable
uplink TS before sending Trigger Frames.

12. The WMM STA must remain awake as long as a USP is still in progress.


  • To ensure backward compatibility, the beacon frame contains TIM information for WMM Power Save frames only if all transmit queues are trigger-and-delivery enabled. If one or more transmit queues uses legacy power save, the beacon frame only contains legacy power-save TIM information.
  • A VoIP application using WMM Power Save may save anywhere from 15 to 40% of power while keeping the impact on latency low.
  • Several changes over legacy power save make these improvements possible:
  •  The client can request a data download without having to wait for a beacon frame. This reduces latency for applications like VoIP that require low latencies and enables more efficient dozing periods when the client does not need to receive or transmit data
  • All downlink data frames are sent together in a fast sequence, thus reducing the number of frames required to receive the same amount of data.
  • The trigger frame in WMM Power Save is effectively a data frame, while the legacy PS-poll frame only includes signalling information. This effectively further reduces the number of frames sent by the client and it is particularly advantageous in applications like VoIP that need to send data frames and poll the access point very frequently.
  • Applications specify the power-save behaviour, thus increasing the flexibility in setting dozing periods and in sending trigger frames. As a result, applications like VoIP will poll the access point frequently during voice calls, while a data application may have longer dozing periods because it can better tolerate longer latencies
  • WMM Power Save can coexist with legacy power save. A WMM Power Save client will still work within a legacy network and run applications that do not support WMM Power Save. As a result, no upgrade is needed to accommodate WMM Power Save devices in existing networks, if WMM Power Save functionality is not required.
  • However to take advantage of the benefits of WMM Power Save, both the client and the access point need to be Wi-Fi CERTIFIED for WMM Power Save. This enables the client and the access point to negotiate power-save behaviour upon association. The presence of other clients in the network that are not Wi-Fi CERTIFIED for WMM Power Save does not affect the use of WMM Power Save for those devices that support it 

Thursday, August 2, 2012

802.11n Features


                       As we use more and more applications, including voice and video, on clients like laptops, smartphones and other Wi-Fi enabled devices, the demand for higher throughput keeps growing; think of watching a movie on one of these devices over Wi-Fi. The 802.11n amendment was introduced to provide higher throughput and meet this growing demand.

Below are some of the main Modifications in 802.11n

1. MIMO(multiple input, multiple output antennas)
2. Frame aggregation A-MPDU & A-MSDU
3. Channel bonding (40 MHz channels)
4. Block Acknowledgments (BLOCK ACK)
5. Modulation & Coding scheme (MCS)
6. Short Guard Interval (short GI)
7. 802.11n interoperability (HT mode)
8. RIFS
9. HT power management


Below are the ways throughput improvements were achieved in 802.11n (the worked numbers follow the list):

  •     The number of OFDM data sub-carriers is increased from 48 to 52, which improves the maximum throughput from 54 to 58.5 Mbps.
  •    Forward Error Correction (FEC) is a system of error control whereby the sender adds redundant data to allow the receiver to detect and correct errors. The 3/4 coding rate is improved to 5/6, boosting the link rate from 58.5 to 65 Mbps.
  •   The shorter guard interval (GI) between OFDM symbols is reduced from 800 ns to 400 ns, which increases throughput from 65 to 72.2 Mbps.
  •   Doubling the channel bandwidth from 20 to 40 MHz slightly more than doubles the rate, from 72.2 to 150 Mbps.
  •   Support for up to four spatial streams (MIMO) increases throughput up to 4 times, from 150 to 600 Mbps.
  •    802.11n maintains backward compatibility with existing IEEE 802.11a/b/g.
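
Putting rough numbers on that chain (using the commonly quoted OFDM parameters: 52 data sub-carriers at 20 MHz, 108 at 40 MHz, and a 3.2 µs symbol with a 0.8/0.4 µs guard interval):

54 Mbps × 52/48               = 58.5 Mbps   (more data sub-carriers)
58.5 Mbps × (5/6) / (3/4)     = 65 Mbps     (higher coding rate)
65 Mbps × 4.0 µs / 3.6 µs     = 72.2 Mbps   (short guard interval)
72.2 Mbps × 108/52            = 150 Mbps    (40 MHz channel)
150 Mbps × 4 spatial streams  = 600 Mbps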

1.MIMO(multiple input, multiple output antennas)


  802.11n has the ability to receive and/or transmit simultaneously through multiple antennas. 802.11n defines many "M x N" antenna configurations, ranging from "1 x 1" to "4 x 4".

MIMO antennas example

   This refers to the number of transmit (M) and receive (N) antennas – for example, an AP with two transmit and   three receive antennas is a "2 x 3" MIMO device.
 The more antennas an 802.11n device uses simultaneously, the higher its maximum data rate. 

802.11n uses advanced signal processing techniques:

a. Spatial Multiplexing (SM)

  •      Spatial Multiplexing (SM) subdivides an outgoing signal stream into multiple pieces, transmitted through different antennas. 
  •      Because each transmission propagates along a different path, those pieces – called spatial streams – arrive with  different strengths and delays.
  •      Multiplexing two spatial streams onto a single channel effectively doubles capacity and thus maximizes data rate.
  •      All 802.11n APs must implement at least two spatial streams, up to a maximum of four.
  •      802.11n stations can implement as few as one spatial stream.

  b. Space-Time Block Coding (STBC)

  •     Space-Time Block Coding (STBC) sends an outgoing signal stream redundantly, using up to four differently-coded spatial streams, each transmitted through a different antenna. 
  •   By comparing arriving spatial streams, the receiver has a better chance of accurately determining the original signal stream in the presence of RF interference and distortion.
  •   That is, STBC improves reliability by reducing the error rate experienced at a given Signal to Noise Ratio (SNR). This optional 802.11n feature may be combined with SM.

  c. Transmit Beamforming (TxBF) 

  •     Transmit Beamforming (TxBF) steers an outgoing signal stream towards the intended receiver by concentrating transmitted RF energy in a given direction.
  •   This technique takes advantage of constructive and destructive multipath effects in the environment.
  •   This optional 802.11n feature is not yet widely implemented.

2. Frame aggregation A-MPDU & A-MSDU

  •     Frame Aggregation increases the payload that can be conveyed by each 802.11 frame, reducing MAC layer overhead from a whopping 83 % to as little as 58 % (using A-MSDU) and 14 % (when using A-MPDU).
  •   Legacy 802.11a/g devices can send no more than 2304 payload bytes per frame.
  •   But new 802.11n devices have the option of bundling frames together for transmission, increasing payload size to reduce the significance of the fixed overhead caused by inter-frame spacing and preamble. 
  
  There are two aggregation options:

 MAC Service Data Unit Aggregation (A-MSDU):

  •  Groups logical link control packets (MSDUs) with the same 802.11e Quality of Service, independent of source or destination.
  •     The resulting MAC frame contains one MAC header, followed by up to 7935 MSDU bytes.
  •     The whole frame must be retransmitted if it is not acknowledged.

MAC Protocol Data Unit Aggregation (A-MPDU):

  •  Multiple Ethernet frames for a common destination are translated to 802.11 format and sent as a burst.
  •  Complete MAC frames (MPDUs) are then grouped into PHY payloads of up to 65535 bytes.
  •  The elements of an A-MPDU burst can be acknowledged individually with a single Block Acknowledgement.
  •  Only unacknowledged MPDUs are retransmitted.

 3. Channel bonding (40 MHz channels)

  •    Legacy 802.11 products use channels that are approximately 20 MHz wide.
  •    New 802.11n products can use 20 or 40 MHz wide channels in either the ISM or UNII band.
  •    802.11n WLANs use 40 MHz channels mainly in the 5 GHz UNII band.

4. Block Acknowledgements (BLOCK ACK)

  •   Rather than sending an individual acknowledge following each data frame, 802.11n introduces the technique of confirming a burst of up to 64 frames with a single Block ACK(BA) frame
  •   The Block ACK even contains a bitmap to selectively acknowledge individual frames of a burst (comparable to the selective acknowledgements of TCP).
  •   The use of combined acknowledgements can be requested by sending a Block ACK Request (BAR).
  •   The Block ACK options are negotiated and confirmed with 'Action' frames defined in 802.11e (WLAN QoS).

5. Modulation & Coding scheme (MCS)

  •   802.11n APs and stations need to negotiate capabilities like the number of spatial streams and channel width.
  •   They also must agree upon the type of RF modulation,coding rate, and guard interval to be used. 
  •   The combination of all these factors determines the actual PHY data rate, ranging from a minimum 6.5 Mbps to a   maximum 600 Mbps




6. Short Guard Interval (short GI)

  •    Guard Interval is the time between transmitted symbols (the smallest unit of data sent at once).
  •    This Guard Interval is necessary to offset the effects of multipath that would otherwise cause Inter-Symbol Interference (ISI).
  •    Legacy 802.11a/g devices use an 800 ns guard interval, but 802.11n devices have the option of pausing just 400 ns. 
  •    Shorter Guard Intervals would lead to more interference and reduced throughput, while a longer Guard Interval would lead to unwanted idle time in the wireless environment. 
  •    A Short Guard Interval (SGI) boosts the data rate by about 11 percent while maintaining symbol separation sufficient for most environments (see the arithmetic below).
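
To see where the 11 percent figure comes from (assuming the standard 3.2 µs OFDM symbol):

symbol time with long GI  = 3.2 + 0.8 = 4.0 µs
symbol time with short GI = 3.2 + 0.4 = 3.6 µs
rate gain                 = 4.0 / 3.6 ≈ 1.11  (about 11% more throughput)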

7. 802.11n interoperability and coexistence

  
  •   Given the fact that millions of legacy 802.11a/b/g devices have been deployed to date, and that those devices operate in the same frequency bands used by 802.11n, enabling coexistence is critical.
  •  802.11n deployments must therefore be able to "play nicely" with 802.11a/b/g, both by limiting 802.11n impact on nearby legacy WLANs and by enabling communication with legacy stations. 
  •  These goals are accomplished using HT Protection and Coexistence mechanisms.

   a. High Throughput (Greenfield) Mode:

  •         There are three 802.11n operating modes: HT, Non-HT, and HT Mixed.
  •     An 802.11n AP using High Throughput (HT) mode – also known as Greenfield mode – assumes that there are no nearby legacy stations using the same frequency band.
  •     If legacy stations do exist, they cannot communicate with the 802.11n AP. HT mode is optional.

      b. Non-HT (Legacy) Mode :

  •    An 802.11n AP using Non-HT mode sends all frames in the old 802.11a/g format so that legacy stations can understand them. 
  •    That AP must use 20 MHz channels and none of the new HT features .
  •    All products must support this mode to ensure backward compatibility, but an 802.11n AP using Non-HT delivers no  better performance than 802.11a/g.

   c. HT Mixed Mode :

  •    The mandatory HT Mixed mode will be the most common 802.11n AP operating mode for the next year or so. 
  •    In this mode, HT enhancements can be used simultaneously with HT Protection mechanisms that permit communication  with legacy stations.
  •    HT Mixed mode provides backwards compatibility, but 802.11n devices pay significant throughput penalties as compared to Greenfield mode.

8. RIFS 

  •    RIFS is a means of reducing overhead and thereby increasing network efficiency.
  •    RIFS may be used in place of SIFS to separate multiple transmissions from a single transmitter, when no  SIFS-separated response transmission is expected. 
  •    RIFS shall not be used between frames with different RA values.
  •    The duration of RIFS is defined by the aRIFS PHY characteristic .
  •    A STA shall not allow the space between frames that are defined to be separated by a RIFS time, as measured on the medium, to vary from the nominal RIFS value (aRIFSTime) by more than ± 10% of aRIFSTime. 
  •    Two frames separated by a RIFS shall both be HT PPDUs.

9. HT power management

  •  802.11n introduces two power-saving mechanisms that can be used by HT radios.
  •  802.11n radios still support the legacy power save mode.
  •  The first new power save mechanism is called Spatial Multiplexing Power Save mode (SM power save).
  •  The purpose of SM power save is to power down all but one of the device's radio chains.
  •  For example, a 4x4 MIMO device with four radio chains would power down three of its four radio chains.
  •  The second power save mechanism is Power Save Multi-Poll (PSMP).
  •  PSMP is an extension to automatic power save delivery.
  •  Unscheduled PSMP (U-PSMP) is similar to U-APSD and uses trigger-enabled and delivery-enabled mechanisms.
  •  Scheduled PSMP (S-PSMP) is similar to S-APSD power save.


Hope this helps as a brief introduction.