CSOnet Hardware and Middleware



The hardware and middleware used to realize CSOnet draw, in part, on earlier experience with MICA2 sensor networks [1], [2] under DARPA’s NEST program, as well as on more recent experience gained in building the prototype CSOnet described by Ruggaber et al. [3]. That experience led to a number of important design decisions whose robustness and reliability were established empirically on Ruggaber’s CSOnet prototype.



CSOnet Hardware

Many sensor networks developed in academia are based on some variation of U.C. Berkeley’s MICA2 module [4]. The MICA2 was a low-cost module integrating an easily programmed microprocessor with a sensor interface, and it could communicate with other MICA2 modules through an embedded radio transceiver. While the MICA2 provided a convenient platform for experimental sensor networks, it was not sufficiently rugged for long-term industrial applications such as the CSOnet system.

The basic building block of CSOnet’s WSAN is a more rugged version of the MICA2 processor module called the Chasqui wireless sensor node (see picture below). The Chasqui node was developed by EmNet LLC to address a number of real-life issues found in platforms such as the MICA2: the MICA2’s limited radio range and the need for a specialized sensor-actuator interface with sufficient power to drive commercial off-the-shelf environmental sensors.

EmNet’s Chasqui node started from the original embedded node designs developed at U.C. Berkeley, enhancing the radio and sensor/actuator interface subsystems of that earlier design. The Chasqui node uses a 115 kbps MaxStream radio operating at 900 MHz with frequency-hopping spread-spectrum (FHSS) signaling to reduce the radio’s sensitivity to interference. The radio has a larger maximum transmission power (1 W) than the earlier MICA2 module, giving the Chasqui node a range of over 700 meters in urban environments and up to 5 km for line-of-sight connections. The radio complies with FCC regulations for the use of license-free ISM spectrum, and its longer range fits well with the distances required by the CSOnet application.

In spite of the higher transmission power drawn by the MaxStream module, careful design of the CSOnet middleware and hardware allows WSANs based on the Chasqui node to operate for several years between battery changes. The Chasqui node consumes up to 5 W when fully active and drops to 0.14 mW in sleep mode. Using a precision real-time clock, Chasqui nodes can coordinate their active and sleep cycles with sufficient precision to function reliably at a 2% duty cycle. At that duty cycle, CSOnet applications based on the Chasqui node have a service life in excess of three years on a 4-cell lithium battery pack.
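As a rough sanity check on these figures, the expected service life can be estimated from the duty cycle and the sleep-mode draw. The sketch below is illustrative only: the 5 W figure above is a peak draw, so the average awake power and the battery capacity used here are assumptions, not CSOnet specifications.

```python
# Hedged back-of-the-envelope estimate of a duty-cycled node's service life.
# The 0.14 mW sleep draw and 2% duty cycle come from the text; the average
# awake power and pack capacity below are illustrative assumptions.

def service_life_years(avg_awake_power_w, sleep_power_w=0.14e-3,
                       duty_cycle=0.02, battery_wh=120.0):
    """Estimate service life in years from average power draw."""
    avg_power_w = duty_cycle * avg_awake_power_w + (1 - duty_cycle) * sleep_power_w
    hours = battery_wh / avg_power_w
    return hours / (24 * 365)

# With an assumed 0.15 W average awake draw and a hypothetical 120 Wh pack,
# the estimate comes out above the three-year target quoted in the text.
years = service_life_years(0.15)
```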

Another major feature of the Chasqui node is its more rugged sensor-actuator interface subsystem. The node uses a highly efficient switching power supply that generates 3.5 V for the microprocessor, 5 V for the radio, and 12 V for the sensors. The interface also includes a MOSFET switch that allows the processor to completely power off a sensor when it is not in use, further minimizing the module’s energy usage and prolonging the node’s service life.

CSOnet’s INode and RNode are both based on the Chasqui processor module. The difference between the two devices is that the INode has a sensor management subsystem, whereas the RNode, which is used only to forward data, requires no such subsystem. INodes are typically located within the sewer system’s manholes, as shown below. The INode is attached to the manhole cover and connected by cable to a pressure or flow sensor located within the sewer conduit. The INode transmits its sensor data out of the sewer manhole to an RNode, usually mounted on a traffic or utility pole. Since conventional manhole covers are made of solid metal, radio waves have difficulty propagating out of the sewer system. As part of the CSOnet project, our academic partners at Purdue University [4] therefore developed a composite fiberglass manhole cover with an embedded antenna, specially designed to radiate efficiently out of the manhole.

GNodes and ANodes consist of a Chasqui processor module interfaced to a single-board computer (SBC) running Linux. The SBC serves as a host computer connected directly to a wide-area network (WAN) through a hardwired Ethernet connection, an 802.11 wireless card, or a cellular card. Data can then be transmitted between neighboring WSANs over these gateways. For the CSOnet system shown in Figure 3, the GNode is located at the CSO diversion structure, so it can also use the actuator interfaces on the Chasqui processor module to actuate the valves controlling water flows into the interceptor line.




CSOnet Middleware

Middleware is software that maintains a high-level abstraction of the communication network that application software can use in a reliable manner. CSOnet’s distributed control algorithm (introduced below in Section V) requires a network abstraction that includes fast and reliable nearest-neighbor communication as well as services supporting less frequent multicasts of control messages. Four middleware services were developed to maintain this network abstraction: 1) a networking service that constructs reliable local routing tables, 2) a routing service that directs messages toward the gateway, 3) a synchronization service that maintains a time-slotted network abstraction, and 4) a power management service that coordinates the network’s waking and sleep cycles.

The underlying networking abstraction is a time-slotted network in which all nodes within the WSAN synchronize the times when they switch on (waking) and switch off (sleep). The synchronization of waking and sleeping periods must be very precise over extremely long periods of time. At the heart of this is a clock synchronization middleware service, similar to the algorithms in [5], triggered by synchronization beacons broadcast by the GNodes once per day. The GNodes themselves synchronize their own clocks to the NIST time server every six hours; the SBC in each GNode accesses the NIST time server through the WAN interface using standard Linux commands. At a predefined time, the GNode broadcasts a synchronization message to the network, which a network flooding layer in the Chasqui node’s communication stack disseminates. The Chasqui nodes reset their internal clocks upon receiving this message. A small delay is introduced at each hop as the sync message propagates outward, but this cascaded delay does not affect communication because the inter-hop delay is approximately symmetric for a sender-receiver pair.
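The beacon dissemination can be pictured as a flood with a small, roughly constant per-hop delay. The simulation below is a hedged illustration; the node names, per-hop delay value, and data structures are assumptions, not CSOnet’s actual implementation.

```python
from collections import deque

# Hedged sketch: a GNode floods a synchronization beacon; each node resets
# its clock on the first copy it hears, and the hop count determines the
# cascaded arrival delay. Names and numbers here are illustrative only.

def flood_sync(neighbors, gnode, per_hop_delay_s=0.01):
    """BFS flood from the gateway; returns each node's beacon arrival offset."""
    arrival = {gnode: 0.0}           # seconds after the GNode broadcast
    queue = deque([gnode])
    while queue:
        node = queue.popleft()
        for nbr in neighbors[node]:
            if nbr not in arrival:   # nodes ignore duplicate beacons
                arrival[nbr] = arrival[node] + per_hop_delay_s
                queue.append(nbr)
    return arrival

# A small 4-node chain: GNode - RNode - RNode - INode.
net = {"G": ["R1"], "R1": ["G", "R2"], "R2": ["R1", "I1"], "I1": ["R2"]}
offsets = flood_sync(net, "G")       # I1 hears the beacon three hops later
```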

Since INodes and RNodes are usually emplaced in remote positions without access to external power, it is crucial that these devices be extremely miserly in their use of battery power. For CSOnet to be economically viable, these remote nodes need a service life of 2-3 years between battery changes. CSOnet achieves this long service life through a power management middleware service that cycles all the nodes in a WSAN between waking and sleeping modes at a two percent duty cycle. Power management relies on two mechanisms. First, every middleware component implements an interface that allows the associated component to be shut down. Second, the microprocessor can enter deep sleep mode for a predetermined time interval; this “alarm clock” component uses an external timer to wake up the microprocessor. The application shuts down all of its components before using the “alarm clock” component.
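The interplay of the two mechanisms can be sketched as follows. The class and method names below are illustrative assumptions, not CSOnet’s actual API; the point is simply that every component exposes a shutdown interface, and the node arms an external timer before entering deep sleep.

```python
# Hedged sketch of the two power-management mechanisms described above.
# All names here are hypothetical; they only model the described behavior.

class Component:
    """A middleware component exposing the shutdown interface (mechanism 1)."""
    def __init__(self, name):
        self.name = name
        self.active = True

    def shutdown(self):
        self.active = False

class AlarmClock:
    """Models the external timer that wakes the microprocessor (mechanism 2)."""
    def __init__(self):
        self.sleep_until = None

    def deep_sleep(self, now_s, interval_s):
        self.sleep_until = now_s + interval_s   # arm timer, then sleep

def enter_sleep(components, alarm, now_s, interval_s):
    for c in components:        # the application shuts down every component
        c.shutdown()
    alarm.deep_sleep(now_s, interval_s)   # then uses the "alarm clock"
```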

Since synchronization beacons occur only once a day, the clock drift must be very small to ensure that all nodes are awake at the same time. Typical real-time clock (RTC) crystal tolerances are on the order of 15 ppm at nominal temperature (25 °C), yielding drifts of up to 1.3 seconds per day at nominal temperature and up to 3 seconds at the temperature extremes typically found outdoors. The Chasqui node uses a precision RTC (Dallas DS3231) with a typical drift of only 2 ppm, giving CSOnet tight synchronization between clock updates.
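The drift figures quoted above follow directly from multiplying the crystal tolerance by the number of seconds in a day:

```python
# Checking the drift figures quoted above: crystal tolerance (parts per
# million) times the seconds in a day gives the worst-case daily drift.
SECONDS_PER_DAY = 24 * 60 * 60           # 86 400 s

drift_15ppm = 15e-6 * SECONDS_PER_DAY    # ~1.3 s/day, matching the text
drift_2ppm = 2e-6 * SECONDS_PER_DAY      # ~0.17 s/day for the DS3231
```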

CSOnet must maintain a network abstraction that forwards sensor data to a gateway for subsequent rebroadcast to other WSANs. It therefore requires network and routing services that enforce this abstraction on a mesh radio network. Several approaches to mesh network communication protocols have been researched by the sensor network community [6]-[11]. These protocols typically assume a dense node population and good inter-nodal connectivity. CSOnet differs radically from this paradigm: its nodes are sparse and have poor inter-nodal reception, owing to the extensive geographical area covered and the urban environment. A particular approach, called Stateless Gradient-Based Persistent Routing, was chosen for its low computational requirements and its robustness.

Stateless Gradient-Based Persistent Routing establishes routes from source to destination by imposing a gradient structure on the network, in a fashion similar to [12], [13]. Each node in the network has a gradient number indicating how close that node is to the destination. Since there may be several destinations, each node stores one gradient number per destination. A destination initiates gradient generation by sending out a beacon message. As the beacon travels outward from the destination, each node receiving it computes its gradient from the number of hops the beacon has traveled and its previous gradient number. The beacon message is transmitted using the network flooding layer.
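In the simplest case, the gradient number reduces to the hop count of the first beacon copy a node hears. The sketch below illustrates that case; the network, names, and the choice to keep only the first (lowest) value are assumptions, not CSOnet’s actual code.

```python
from collections import deque

# Hedged sketch: gradient numbers built from a destination's beacon flood.
# Each node keeps the hop count of the first beacon copy it receives.

def build_gradients(neighbors, destination):
    """Each node's gradient = hops traveled by the destination's beacon."""
    gradient = {destination: 0}
    queue = deque([destination])
    while queue:
        node = queue.popleft()
        for nbr in neighbors[node]:
            if nbr not in gradient:          # keep the first (lowest) value
                gradient[nbr] = gradient[node] + 1
                queue.append(nbr)
    return gradient

# A hypothetical WSAN with a gateway G, relays R1-R2, and INodes I1-I2.
net = {"G": ["R1"], "R1": ["G", "R2", "I1"], "R2": ["R1", "I2"],
       "I1": ["R1"], "I2": ["R2"]}
g = build_gradients(net, "G")    # e.g. G:0, R1:1, R2:2, I1:2, I2:3
```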

When a node in the network desires to transmit a data message, it appends its own gradient number for the destination along with the destination ID and sends the message to all of its neighbors. Only those neighboring nodes with a lower gradient number than the transmitter forward the message, so messages travel downgradient toward the destination. This method resembles the so-called Directed Diffusion algorithm [8]. To increase reliability, an anonymous acknowledgement is expected for each forwarded message: if a forwarding node does not hear that its message was received by a lower-gradient node, it retransmits.
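The forwarding rule and retransmission behavior can be sketched as below. The function signature, link model, and retry count are illustrative assumptions, not CSOnet’s actual implementation.

```python
# Hedged sketch of the downgradient forwarding rule with anonymous
# acknowledgements; the retry model here is an illustrative assumption.

def forward(node, gradient, neighbors, link_up, max_retries=3):
    """Pick the next hop for a message at `node`, or None if no ACK is heard.

    Only neighbors with a strictly lower gradient number forward the
    message; hearing such a neighbor rebroadcast it serves as an
    anonymous acknowledgement.
    """
    candidates = [n for n in neighbors[node] if gradient[n] < gradient[node]]
    for _ in range(max_retries):       # retransmit on missing acknowledgement
        for nbr in candidates:
            if link_up(node, nbr):     # a lower-gradient neighbor heard it
                return nbr
    return None                        # give up after max_retries attempts
```

Because the decision depends only on the neighbors’ gradient numbers, no per-route state needs to be stored, which is what makes the protocol stateless.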

Since no explicit routing information is generated, the computational complexity of the protocol is minimal compared with traditional Bellman-Ford or Dijkstra-based approaches [14], [15]. Moreover, the network is inherently resilient to node failure as long as the network remains connected with some link success probability; the number of retransmissions allowed for lack of acknowledgement can be adjusted to those probabilities.




References
  1. L. Seders, C. Shea, M. Lemmon, P. Maurice, and J. Talley, “Lakenet: an integrated sensor network for environmental sensing in lakes,” Environmental Engineering Science, vol. 24, no. 2, pp. 183–191, 2007.
  2. L. Fang, P. Antsaklis, L. Montestruque, B. McMickell, M. Lemmon, Y. Sun, H. Fang, I. Koutroulis, M. Haenggi, M. Xie, and X. Xie, “Design of a wireless dead reckoning pedestrian navigation system: the navmote experience,” IEEE Transactions on Instrumentation and Measurement, vol. 54, no. 6, pp. 2342–2358, 2006.
  3. T. Ruggaber, J. Talley, and L. Montestruque, “Using embedded sensor networks to monitor, control, and reduce cso events: A pilot study,” Environmental Engineering Science, vol. 24, no. 2, pp. 172–182, 2007.
  4. J. Mastarone and W. Chappell, “Urban sensor networking using thick slots in manhole covers,” in Proceedings of IEEE Antennas and Propagation Society International Symposium, 2006, pp. 779–782.
  5. S. Ganeriwal, R. Kumar, and M. Srivastava, “Timing-sync protocol for sensor networks,” in ACM Conference on Embedded Networked Sensor Systems (SENSYS 2003), 2003.
  6. I. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, “A survey on sensor networks,” IEEE Communications Magazine, vol. 40, no. 8, pp. 102–114, 2002.
  7. W. Heinzelman, A. Chandrakasan, and H. Balakrishnan, “Energy-efficient communication protocol for wireless microsensor networks,” in Proceedings of the 33rd Hawaii International Conference on System Sciences (HICSS ’00), 2000.
  8. C. Intanagonwiwat, R. Govindan, and D. Estrin, “Directed diffusion: a scalable and robust communication paradigm for sensor networks,” in Proceedings of ACM MobiCom, 2000.
  9. W. Heinzelman, J. Kulik, and H. Balakrishnan, “Adaptive protocols for information dissemination in wireless sensor networks,” in Proceedings of the 5th ACM/IEEE MobiCom Conference, 1999.
  10. A. Manjeshwar and D. Agrawal, “APTEEN: a hybrid protocol for efficient routing and comprehensive information retrieval in wireless sensor networks,” in Parallel and Distributed Processing Symposium (IPDPS), 2002.
  11. T. He, J. Stankovic, C. Lu, and T. Abdelzaher, “SPEED: a stateless protocol for real-time communication in sensor networks,” in International Conference on Distributed Computing Systems, 2003.
  12. D. Braginsky and D. Estrin, “Rumor routing algorithm for sensor networks,” in Intl. Conf. Distributed Computing Syst. (ICDCS-22), 2001.
  13. M. Maroti, “Directed flood routing framework,” Technical Report ISIS-04-502, Institute for Software Integrated Systems, Vanderbilt University, 2004.
  14. C. Perkins and E. Royer, “Ad hoc on-demand distance vector routing,” in Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications, 1999.
  15. D. Johnson and D. Maltz, “Source routing in ad hoc wireless networks,” in Mobile Computing, Ch. 5, pp. 153-181, 1996.