[Data-Related - Data Munging] Blogs:
[Images - FFT] Blogs:
[Images - FFT] Papers:
Summary of this post:
Zipf's law states that given some corpus of natural language utterances, the frequency of any word is inversely proportional to its rank in the frequency table. Thus the most frequent word will occur approximately twice as often as the second most frequent word, three times as often as the third most frequent word, etc.: the rank-frequency distribution is an inverse relation. For example, in the Brown Corpus of American English text, the word "the" is the most frequently occurring word, and by itself accounts for nearly 7% of all word occurrences (69,971 out of slightly over 1 million). True to Zipf's Law, the second-place word "of" accounts for slightly over 3.5% of words (36,411 occurrences), followed by "and" (28,852). Only 135 vocabulary items are needed to account for half the Brown Corpus.
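The Brown Corpus counts quoted above can be compared against the idealized Zipf prediction (frequency at rank r ≈ top frequency / r). This sketch is illustrative and uses only the three counts given in the summary:

```python
# Compare observed Brown Corpus counts (from the summary above) against
# the idealized Zipf prediction: freq(rank r) ~ freq(rank 1) / r.
counts = {"the": 69971, "of": 36411, "and": 28852}  # ranks 1, 2, 3

f1 = counts["the"]  # frequency of the top-ranked word
for rank, (word, freq) in enumerate(counts.items(), start=1):
    predicted = f1 / rank
    print(f"{word!r}: rank {rank}, observed {freq}, Zipf-predicted {predicted:.0f}")
```

The predictions are rough, as Zipf's law is an approximation: "of" is observed 36,411 times versus a predicted ~34,986, and "and" 28,852 versus a predicted ~23,324.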
This paper presents a technology capable of recognizing a person's emotions by relying on wireless signals reflected off her/his body. We believe this marks an important step in the nascent field of emotion recognition. It also builds on a growing interest in the wireless systems' community in using RF signals for sensing, and as such, the work expands the scope of RF sensing to the domain of emotion recognition. Further, while this work has laid foundations for wireless emotion recognition, we envision that the accuracy of such systems will improve as wireless sensing technologies evolve and as the community incorporates more advanced machine learning mechanisms in the sensing process.
We also believe that the implications of this work extend beyond emotion recognition. Specifically, while we used the heartbeat extraction algorithm for determining the beat-to-beat intervals and exploited these intervals for emotion recognition, our algorithm recovers the entire human heartbeat from RF, and the heartbeat displays a very rich morphology. We envision that this result paves the way for exciting research on understanding the morphology of the heartbeat, both in the context of emotion recognition and in the context of non-invasive health monitoring and diagnosis.
Google and others like Apple and Skyhook build a database that links WLAN BSSIDs to geographic locations. A BSSID is essentially the MAC address of an access point, which the access point itself broadcasts. It is therefore "publicly viewable" whenever BSSID broadcasting is enabled, which is the default for most access points. The BSSID operates at a lower layer than the IP stack; you don't even have to be connected to an access point to receive these broadcasts.
So, essentially, when you ARE using WiFi and GPS, Google's database of BSSIDs is updated with a geographic location associated with each BSSID, as you assumed. In your case, your AP is sending beacons advertising its BSSID, and because it is already in Google's database, Google Maps knows where you are based on the location of that AP.
So it's not that the ISP is giving Google the location of their routers; it's that your phone is helping to build a database of the access points around you, and Google uses this data for geolocation.
Sadly, even if you get a new router and keep any and all Android devices away from it, Google will still be able to approximate your location based on the cell towers you associate with (or maybe even your neighbor's AP!), but it won't be nearly as accurate.
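The lookup side of this scheme can be sketched roughly as follows. Everything here is an illustrative assumption (the database contents, the RSSI-based weighting, and the function name are made up for the sketch), not Google's actual algorithm:

```python
# Hypothetical sketch: estimate a device's position from the BSSIDs it hears,
# using a crowdsourced database of known AP locations. The database entries,
# weighting scheme, and coordinates below are invented for illustration.
bssid_db = {
    # BSSID -> (lat, lon), as reported by phones that had both GPS and WiFi on
    "aa:bb:cc:dd:ee:01": (40.7128, -74.0060),
    "aa:bb:cc:dd:ee:02": (40.7130, -74.0055),
}

def estimate_location(observed):
    """observed: list of (bssid, rssi_dbm) pairs heard in a WiFi scan.
    Returns a (lat, lon) estimate, weighting each known AP by received
    signal strength (stronger signal -> presumably closer AP), or None
    if no observed BSSID is in the database."""
    total_w = lat = lon = 0.0
    for bssid, rssi_dbm in observed:
        if bssid not in bssid_db:
            continue
        w = 10 ** (rssi_dbm / 10)  # dBm -> linear power, used as a crude weight
        a, o = bssid_db[bssid]
        lat += w * a
        lon += w * o
        total_w += w
    if total_w == 0:
        return None
    return (lat / total_w, lon / total_w)

print(estimate_location([("aa:bb:cc:dd:ee:01", -40), ("aa:bb:cc:dd:ee:02", -50)]))
```

With two APs in range, the estimate lands between them, pulled toward the one with the stronger signal; an AP with no database entry contributes nothing, which is why a brand-new router is invisible to this scheme until some device reports it.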
First, let's welcome our friends [footnote 2: Right off the bat, you're mad at me, so allow me to explain: I love bokeh and plotly. Indeed, one of my favorite things to do before sending out an analysis is getting "free interactivity" by passing my figures to the relevant bokeh/plotly functions; however, I'm not familiar enough with either to do anything more sophisticated. (And let's be honest - this post is long enough.) Obviously, if you're in the market for interactive visualizations (versus statistical visualizations), then you should probably look to them.]
matplotlib. The 800-pound gorilla - and like most 800-pound gorillas, this one should probably be avoided unless you genuinely need its power, e.g., to make a custom plot or produce a publication-ready graphic. (As we'll see, when it comes to statistical visualization, the preferred tack might be: "do as much as you easily can in your convenience layer of choice [i.e., any of the next four libraries], and then use matplotlib for the rest.")
pandas. "Come for the DataFrames; stay for the plotting convenience functions that are arguably more pleasant than the matplotlib code they supplant." - rejected pandas taglines (Bonus tidbit: the pandas team must include a few visualization nerds, as the library includes things like RadViz plots and Andrews Curves that I haven't seen elsewhere.)
Seaborn. Seaborn has long been my go-to library for statistical visualization; it summarizes itself thusly: "If matplotlib 'tries to make easy things easy and hard things possible,' seaborn tries to make a well-defined set of hard things easy too."
yhat's ggplot. A Python implementation of the wonderfully declarative grammar of graphics. This isn't a "feature-for-feature port of ggplot2," but there's strong feature overlap. (And speaking as a part-time R user, the main geoms seem to be in place.)
Altair. The new guy, Altair is a "declarative statistical visualization library" with an exceedingly pleasant API. Wonderful. Now that our guests have arrived and checked their coats, let's settle in for our very awkward dinner conversation. Our show is entitled
[ ... SNIP! ... ]
[6 × 10^9 people] / [70 yr (lifespan)] / [365.25 d/yr] / [24 hr/d] / [60 min/hr] / [60 sec/min] ≈ 2.716 deaths/sec
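The unit cancellation above can be checked mechanically; the population and lifespan figures are the ones assumed in the original estimate:

```python
# Sanity-check the back-of-the-envelope death-rate arithmetic above.
people = 6e9                 # world population assumed in the estimate
lifespan_yr = 70             # assumed average lifespan
seconds_per_yr = 365.25 * 24 * 60 * 60

rate = people / (lifespan_yr * seconds_per_yr)
print(f"{rate:.3f} deaths/sec")  # prints "2.716 deaths/sec"
```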
Bowtie: GitHub | demo | Python library for writing interactive visualizations [reddit]