I'm impressed with the quantity and quality of the videos people have created about their research in device-free localization and radio tomographic imaging. I think it is time to group them together in some way so that someone interested in this area of research can find them all and watch them. I created a YouTube playlist called "RSS-based Device-Free Localization". When you run out of things to watch on Hulu and Netflix, take a look!
Also -- if you've put up a public video on the topic and want me to include it, post a note on the playlist.
In my research, I sometimes need to record video from an experiment. But in reality, I don't need video at multiple frames per second; I just need a photo of the environment at a regular interval, like once per second. These photos allow me to go back in post-processing and figure out exactly where people were in the environment, for example. Here are the problems with using, for example, a Flip video recorder to record video:
Another problem with my webcam is that it is hard to point it in the right direction to capture a picture of the environment. Instead, I use a USB-connected camera with a stand that I can point in whatever direction I want.
The solution I use to capture images at a regular interval is the "streamer" program. It allows me to collect images at a set rate (with the -r option) for a set number of frames (with the -t option). For example:
streamer -c /dev/video1 -t 600 -r 1 -f jpeg -o temp000.jpeg
The files will have a unique number as the last three digits of the file name, and the "date modified" field will show my laptop's system time when each image was created.
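Since I rely on the "date modified" field to recover capture times in post-processing, a short script can list each numbered image with its timestamp. Here is a minimal Python sketch, not part of my capture setup; the temp*.jpeg pattern just matches the streamer command above:

import glob
import os
from datetime import datetime

# List the captured frames in numerical order (the three-digit
# suffix is zero-padded, so name order is frame order) and print
# each file's modification time, which streamer sets at capture.
for path in sorted(glob.glob("temp*.jpeg")):
    mtime = os.path.getmtime(path)
    print(path, datetime.fromtimestamp(mtime).isoformat())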
I just dug out this photo from my phone: This is Joey Wilson (Xandem Tech, a former SPAN lab member) with Chris O and a cameraman from KBS TV, a Korean broadcasting station. Joey was being interviewed for a Korean documentary on radio waves, sponsored by the Korean Institute of Radio (a government agency). The photo is from October 26, 2010.
I'm a new dad these days and, I guess it goes without saying, sleep deprived. I have a wonderful baby daughter who is otherwise perfect (in my opinion) but who is not fond of sleeping -- which may make her great at all-nighters later in life, but is something I'm not too fond of today.
One thing that has helped is having white noise in the background to cover up all of the other sounds going on in the house. I think it has also become part of the routine, so she knows that the background noise means that it's time for sleep. There are many ways to make noise, and we've tried running the bathroom fan, or putting the radio on a channel with no station. But my favorite continues to be our white noise CD. It doesn't waste energy (it certainly uses energy, but not as much as a fan) and doesn't waste water (like running a faucet does).
My noise CD is "handmade", which really means computer-made -- and not by my computer, but by my students in 5510 in Fall 2006. I assigned each of them the task of making a three-minute white noise sound track. And when I say "white noise", I mean that only in the common use of the term.
Actually, "white noise" is non-white noise. Seriously. I don't mean that in the ethnic sense, I mean it in the frequency domain sense. Pure white noise, that is, noise with constant power spectral density (PSD) as a function of frequency, is awful to hear. Try it if you don't believe me. If you analyze one of the "pure white noise" tracks on a commercial CD, you'll see that it emphasizes some frequencies (lower bands) more than others (higher bands).
So the assignment was to design a linear time-invariant (LTI) filter that would produce a good-sounding "white noise" track, and to analyze the PSD that the filter would produce, using techniques learned in class.
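The students did this in Matlab; as an illustration of the same idea, here is a minimal Python sketch (the one-pole filter and its coefficient a = 0.9 are my assumptions for illustration, not any student's design) that shapes white Gaussian noise with a lowpass LTI filter and checks the resulting PSD:

import numpy as np
from scipy import signal

fs = 44100                            # CD-quality sample rate
rng = np.random.default_rng(0)
white = rng.standard_normal(10 * fs)  # 10 seconds of flat-PSD noise
# One-pole lowpass LTI filter: y[n] = (1 - a)*x[n] + a*y[n-1].
# Larger a pushes more of the power into the lower bands.
a = 0.9
shaped = signal.lfilter([1 - a], [1, -a], white)
# Compare PSD estimates of the flat and shaped noise.
f, psd_white = signal.welch(white, fs=fs)
f, psd_shaped = signal.welch(shaped, fs=fs)
print("low-band gain: ", psd_shaped[1] / psd_white[1])
print("high-band gain:", psd_shaped[-1] / psd_white[-1])

The printed ratios show the filter boosting the lowest analyzed band and attenuating the highest, which is the "emphasize lower bands" shape described above.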
In any case, when I assigned it, I didn't know how valuable these tracks would be for me today. I've probably played a white noise track on my CD player (on the one-song-repeat option) on the order of 10,000 times. So I owe a debt of gratitude to my students.
Since I've been thinking about this CD quite a bit, I thought I'd post and share the link in case someone else would benefit from a white noise CD. The tracks are in MP3 format, so they're ready for your standard CD burner software. Your results may vary; some of these tracks sound quite a bit better (to my ear) than others, so you might want to select your favorites and burn only those.
Part of my job is to train students to write research papers. This involves me "teaching" technical writing, which I am definitely not trained to do. So I don't know what general training to give students to improve their writing. However, there are a lot of do's and don'ts that I like and don't like, respectively. These are (mostly) easy rules to follow that a student can (and should) check on their own. Please add comments to this post if you think of other helpful do's and don'ts.
I just finished dusting off some Matlab code to estimate the entropy of English character sequences from a text source. In my opinion, this is a good tool for teaching entropy rate. One might use the idea to calculate the entropy rate of another language, or of another discrete-valued data source, like numerical data or Twitter tweets. My code isn't particularly smart; its storage (and computation) increases as $L^N$, where $L$ is the number of distinct characters considered and $N$ is the character sequence length. I'm sure someone more adept at programming can implement a more efficient version (perhaps with a hash table?). However, the code does work, and computes the entropy for a sequence of characters (I've tested up to 4) from a given text file. I used Shakespeare's Romeo and Juliet, and found per-character entropies of 4.12, 3.73, 3.35, and 2.99 for $N =$ 1, 2, 3, and 4, respectively. Info on how this is done is in my lecture 4 notes from today's Advanced Random Processes class; the letter entropy Matlab code and Shakespeare text are also posted.
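My Matlab code isn't shown here, but the hash-table idea is easy to sketch in Python. The filename below is hypothetical, and the exact numbers will depend on preprocessing (for example, which characters you keep and whether you fold case):

import math
from collections import Counter

def entropy_per_char(text, n):
    # Count length-n character blocks with a hash table (dict),
    # avoiding the L^N array that the brute-force version allocates.
    counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / n  # per-character entropy of the n-character blocks, in bits

text = open("romeo_and_juliet.txt").read().lower()  # hypothetical filename
for n in range(1, 5):
    print(n, round(entropy_per_char(text, n), 2))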
Automatic download of bibliographic information is a great tool for keeping track of what you've downloaded, read and reviewed, and learned from published research. My favorites are Zotero and Google Scholar's Bibliography Manager, which (if you change your Scholar Preferences) shows you the BibTeX for an article. However, at this stage, I have this warning: it is not yet automatic. That is, don't just take the BibTeX entry as supplied. I notice lots of errors and typos, and they make a reference list look unprofessional. People can tell when they read a paper whose bib file was generated automatically and never reviewed.
For example, Google Scholar leaves a seemingly random set of words in conference titles lowercase, producing a proceedings titled "Proceedings of the 2nd international conference on Multi-hop, ad Hoc, and mesh Networks". Fix this so that all words have their first letter capitalized, except for a few common short words (e.g., "a", "on", "and", "of", "the"). Second, these managers don't seem to know that just because a paper appears in CiteSeer, CiteSeer isn't its publisher. Or that an ACM conference was probably not located in New York, NY, even though the ACM is headquartered in NYC. Delete any publisher information from a conference proceedings paper, except when the publisher is part of the name (e.g., "Proceedings of the IEEE International Conference on Blah Blah Blah"). Next, capitalization in titles should be consistent -- the first word of the title capitalized, but no other title words capitalized (except for proper nouns and acronyms). You may need to put extra curly brackets around each proper noun or acronym to keep the capitalization as you have written it in the title field, and you may need to fix the rest of the capitalization in the title field.
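To make this concrete, here is what a cleaned-up entry might look like. The citation key, authors, title, and year are invented for illustration; the booktitle is the corrected form of the Scholar example above, and the publisher and address fields have been deliberately deleted:

This is a hypothetical entry for illustration only; BibTeX ignores text outside the entry.
@inproceedings{author2008madeup,
  author    = {A. Author and B. Author},
  title     = {A made-up title about {RSS}-based localization in {Manhattan}},
  booktitle = {Proceedings of the 2nd International Conference on Multi-Hop, Ad Hoc, and Mesh Networks},
  year      = {2008}
}

Note the extra curly brackets around {RSS} and {Manhattan}: they keep the acronym and the proper noun capitalized even when the bibliography style lowercases the rest of the title.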
Maybe this is just my pet peeve, but I wouldn't bet on it. Look at and fix all BibTeX entries as you add them, until the day comes when bibliographic managers have better data.