Fall Detection

In this project, we use radio frequency (RF) measurements in a wireless network to detect a person's pose (standing, sitting, or lying down). In particular, we are interested in detecting falls, that is, rapid transitions from another pose to lying down.

Publication

Brad Mager, Neal Patwari, and Maurizio Bocca, Fall detection using RF sensor networks, IEEE Personal, Indoor and Mobile Radio Communications Conference (PIMRC 2013), London, 9 Sept. 2013.

Brief Description of the Technology

There is a clear need for non-contact devices (devices that need not be worn) that automatically detect falls. One such technology is video surveillance combined with computer vision algorithms that automatically determine that a person has fallen. However, many seniors see this as too invasive of their privacy and are generally unwilling to be recorded in their own homes. Another approach uses passive infrared (PIR) sensors, deployed in each room, to track a person's current location. A lack of motion over a long period of time, or other unusual activity, is used in some products to infer that a fall has occurred. For example, the Quiet Care system (a partnership with Qualcomm) triggers a fall alert if a person leaves the bedroom at night and is gone for too long. However, simply moving to the couch to sleep would trigger a false alarm.

In contrast, our system is contact-free, minimally invasive, and able to reliably detect falls anywhere in a home. It uses small, low-cost radio transceivers deployed around the home that sense presence via radio waves. Because the human body is mostly water, a person alters the propagation of radio waves as they move, for example attenuating a transmitter’s signal at a receiver when standing directly between the two. By placing the transceivers at different heights and locations, the system directly measures a person’s presence in three dimensions. Radio waves are not blocked by (non-metal) walls or furniture, so the system can “see through” walls and obstructions, unlike video cameras or infrared sensors. Further, the transceivers can be hidden behind walls or inside other objects, and thus kept out of sight. Finally, because our system cannot image any feature smaller than about six inches, and does not record audio, it is not as invasive of privacy as video.
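To make the sensing principle concrete, the sketch below shows how a single link's received signal strength (RSS) can indicate a person crossing the line between a transmitter and receiver. This is a minimal illustration, not the paper's algorithm: the function name, calibration procedure, and the 3 dB threshold are all illustrative assumptions.

```python
import numpy as np

def link_shadowed(rss_history, rss_now, threshold_db=3.0):
    """Flag a link as shadowed when its current RSS drops well below
    the empty-room baseline (mean of calibration samples).
    Threshold is an illustrative assumption, not a tuned value."""
    baseline = np.mean(rss_history)       # dBm, measured with the area vacant
    attenuation = baseline - rss_now      # positive when the signal weakened
    return bool(attenuation > threshold_db)

# Calibration: RSS in dBm while the room is empty, then a person steps
# into the line between transmitter and receiver.
calibration = [-52.1, -51.8, -52.3, -52.0, -51.9]
print(link_shadowed(calibration, rss_now=-58.4))  # person blocks the link
print(link_shadowed(calibration, rss_now=-52.2))  # link unobstructed
```

In practice, measurements from many such links, crossing the space at different heights and angles, are combined into the tomographic image described below rather than thresholded one at a time.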

Our technology is closest to the Doppler radar sensors currently being developed for fall detection [Liu 2011], which also use RF to detect falls. In comparison, our system does not require radar devices. Moreover, a Doppler radar cannot distinguish someone sitting down quickly from someone falling quickly from a chair to the floor, because both motions have the same direction.

Our method involves: (a) collecting received signal strength measurements in a wireless network; (b) estimating a three-dimensional map of the attenuation caused by the person's body, i.e., a radio tomographic image; (c) determining the person's pose from that map; and (d) detecting a fall when the person transitions to lying down from another pose very quickly.
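Steps (c) and (d) above can be sketched in a few lines. This is a simplified stand-in for the paper's classifier, assuming the tomographic image has already been reduced to a per-layer attenuation total (five layers, floor to head height): the vertical "center of mass" of attenuation indicates pose, and a fall is flagged when the pose reaches lying down within a short transition time. All profiles and thresholds are illustrative.

```python
import numpy as np

LAYERS = 5  # imaging layers from floor (0) to head height (assumed count)

def classify_pose(layer_attenuation):
    """Heuristic pose classifier: a lying person concentrates attenuation
    in the lowest layer, a standing person spreads it over all layers,
    and sitting falls in between. Thresholds are illustrative."""
    a = np.asarray(layer_attenuation, dtype=float)
    # height of the attenuation "center of mass", in layer units
    centroid = np.sum(np.arange(LAYERS) * a) / np.sum(a)
    if centroid < 0.8:
        return "lying"
    elif centroid < 1.6:
        return "sitting"
    return "standing"

def detect_fall(pose_sequence, timestamps, max_transition_s=1.0):
    """Flag a fall when the pose becomes 'lying' within max_transition_s
    of the last non-lying pose (step (d) of the method)."""
    for i in range(1, len(pose_sequence)):
        if pose_sequence[i] == "lying" and pose_sequence[i - 1] != "lying":
            if timestamps[i] - timestamps[i - 1] <= max_transition_s:
                return True
    return False

standing = [1.0, 1.2, 1.1, 1.0, 0.9]  # attenuation spread over all layers
lying    = [4.0, 0.6, 0.1, 0.1, 0.1]  # attenuation concentrated near the floor
poses = [classify_pose(standing), classify_pose(lying)]
print(poses)                                       # ['standing', 'lying']
print(detect_fall(poses, timestamps=[0.0, 0.4]))   # True: 0.4 s transition
```

The same transition observed over several seconds (e.g., deliberately lying down on a bed) would not trigger the detector, which is what separates a fall from an ordinary change of pose.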

Images

Figure 1: The sensor nodes are deployed in two layers. This not-to-scale illustration shows how a person’s upper torso affects the RF signals in the upper layer more than in the lower layer.

Figure 2: In a two-level sensor network, the images formed from the signal strength on the links in each level look different when a person is standing up than when lying down. This makes it possible for the system to distinguish these two positions.

Figure 3: The data can be processed as five layers of radio tomographic imaging. This illustration shows what the images look like for each layer when a person is standing up inside the sensor network.

Figure 4: By representing each imaging layer as a cylinder, with the radius of each cylinder determined by the layer’s total attenuation, it’s possible to create an image that looks something like a person. The sphere on top is placed there for aesthetic purposes. We can "watch" the pose of a person over time via a video (the example shows a person walking around the area).
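The cylinder rendering described in Figure 4 amounts to a simple mapping from per-layer attenuation to radius. The sketch below is one plausible way to do it (not taken from the paper): scale radii so the most attenuated layer gets a fixed maximum radius; the 0.35 m maximum and the attenuation values are illustrative.

```python
import numpy as np

def cylinder_radii(layer_attenuation, max_radius_m=0.35):
    """Map each layer's total attenuation (dB) to a cylinder radius,
    scaling so the most attenuated layer gets max_radius_m.
    The maximum radius is an illustrative assumption."""
    a = np.asarray(layer_attenuation, dtype=float)
    return max_radius_m * a / a.max()

# Per-layer attenuation for a standing person (illustrative values):
# torso layers attenuate more than the ankle or head layers.
atten = [1.5, 2.0, 3.0, 2.4, 1.2]
for layer, r in enumerate(cylinder_radii(atten)):
    print(f"layer {layer}: radius {r:.2f} m")
```

Stacking the resulting cylinders from floor to head height yields the rough human silhouette shown in the figure.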

Figure 5: When the data are processed as five layers of radio tomographic imaging, the image slices will be different for a person in the three different vertical positions recognized by the system.