Ted Pavlic
9,095 followers
Autonomy science and engineering researcher, husband, Android enthusiast, and bringer of the science.

Ted Pavlic's posts

Post is pinned.
My #TEDxASU talk about the hidden costs of disciplinary identity is now available on YouTube.

[ #academia #STEM #interdisciplinary #science #technology #engineering #math #mathematics #undisciplined ]

"Every winter thousands of giant Australian cuttlefish gather to breed in a stretch of shallow, rocky water off Point Lowly in South Australia. The phenomenon, known as an aggregation, is the only known instance of cuttlefish gathering in such large numbers – it is estimated there can be more than 150,000 in a 10km stretch of water – and has become a tourist as well as scientific attraction. This video, taken by mpaynecreative.tv, captures male cuttlefish as they display their brightest pigments in a bid to attract females. It is not known why the giant Cuttlefish aggregate in this area particularly but it is believed they are likely attracted to the shallow rocky area along the coast as it provides optimal habitat to lay their eggs."

Here, Dr. Landsteiner said, string theory was used to calculate the expected anomaly. “It puts string theory onto a firm basis as a tool for doing physics, real physics,” he said. “It seems incredible even to me that all this works, falls all together and can be converted into something so down to earth as an electric current.”

That's clever. Fiber optic loops baked into the bar so that you get apparently random flickering as parts of your body shade different parts of the table.

I really hate the phrase "people who attend the same conference sessions as you" as an example of whom to suggest as a reviewer. I know that boring, marginal, monolithic, single-investigator research is still common, but we don't have to encourage the proliferation of such practices with phrases like these.

If the only thing that appeals to you about a particular conference is a particular set of sessions that all of the same people go to, then maybe that's not a good conference to go to. That's all I'm saying...

#undisciplined

I love #eduroam, but I think many academics have no idea that it exists! If you are at a university that is a member of the worldwide eduroam network, then you automatically have secure WiFi access when you travel to other member universities. No need to look for guest networks or passwords. Just look for "eduroam" and use your university e-mail and password.
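For anyone setting it up by hand on Linux, a minimal wpa_supplicant-style sketch might look like the following. The exact settings are an assumption here: institutions differ in EAP method, inner authentication, anonymous identity, and CA certificate, so check your own university's IT instructions (PEAP with MSCHAPv2 is merely a common configuration).

```
network={
    ssid="eduroam"
    key_mgmt=WPA-EAP
    eap=PEAP                       # many campuses use PEAP or TTLS; yours may differ
    identity="you@university.edu"  # hypothetical placeholder: your campus login
    password="your-password"
    phase2="auth=MSCHAPV2"         # common inner method; institution-specific
    # ca_cert="/path/to/ca.pem"    # recommended: validate your institution's CA
}
```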

Apparently someone at the sheriff's office got confused in claiming that an Amazon Echo called them, but I think this Wired author, +Emily Dreyfuss, is obsessing too much over the stand-alone assistant devices.

My Echo, Home, and sometimes my phone all have gotten accidentally triggered by what they thought was a wake word. In the case of a phone that is always listening for "Ok, Google" or "Hey, Siri" (etc), it seems much more plausible that 911 could actually get called.

Regardless of whether the stand-alone assistant devices can call 911, increasingly every cell phone is getting the same voice-activated features baked into it. If the story from the sheriff's office had been that a phone essentially "butt dialed" them, I don't think anyone would have doubted it.

So maybe the mission-creep future the Wired author speculates about is actually closer than she thinks.

Worlds collide! I was surprised to see Radhika Nagpal's name pop up in a blog post by Melissa A. Wilson Sayres! I know both of these people -- they both do very exciting things, but they do very different things. I think it's great to have these two perspectives on tenure-track lives next to each other. Both are good reads.

Radhika's original blog post ("The Awesomest 7-Year Postdoc" (+Dr. Strangelove ref)):
https://blogs.scientificamerican.com/guest-blog/the-awesomest-7-year-postdoc-or-how-i-learned-to-stop-worrying-and-love-the-tenure-track-faculty-life/

Melissa's reflection after 3 years on the tenure track ("Not the awesomest 7-Year postdoc"):
http://mathbionerd.blogspot.com/2017/07/not-awesomest-7-year-postdoc.html

Hold on to your butts. Backpropagation (and thus CNNs) comes to IBM's TrueNorth. Deep learning meets fast, energy-efficient, neuromorphic hardware.

Popular media: https://arstechnica.com/science/2017/07/pocket-brains-neuromorphic-hardware-arrives-for-our-brain-inspired-algorithms/

Primary source:

"Convolutional networks for fast, energy-efficient neuromorphic computing"
by Esser et al.
PNAS USA (2016), 113(41):11441–11446
http://www.pnas.org/content/113/41/11441.full

Abstract
=====
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.
=====

Significance
=====
Brain-inspired computing seeks to develop new technologies that solve real-world problems while remaining grounded in the physical requirements of energy, speed, and size. Meeting these challenges requires high-performing algorithms that are capable of running on efficient hardware. Here, we adapt deep convolutional neural networks, which are today’s state-of-the-art approach for machine perception in many domains, to perform classification tasks on neuromorphic hardware, which is today’s most efficient platform for running neural networks. Using our approach, we demonstrate near state-of-the-art accuracy on eight datasets, while running at between 1,200 and 2,600 frames/s and using between 25 and 275 mW.
=====
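The abstract's central trick -- training with ordinary backpropagation while inference runs on low-precision synapses -- can be illustrated very loosely with a toy sketch. This is my own straight-through-style illustration, not the Esser et al. method: full-precision "shadow" weights receive the gradient updates, while the forward pass uses trinary weights in {-1, 0, +1}, as a stand-in for neuromorphic synapses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2D points, label = 1 when x0 + x1 > 0
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = rng.normal(scale=0.1, size=2)   # full-precision "shadow" weights
b = 0.0

def trinarize(w, thresh=0.05):
    # Project shadow weights onto the low-precision set {-1, 0, +1}
    return np.where(np.abs(w) < thresh, 0.0, np.sign(w))

lr = 0.1
for _ in range(300):
    wq = trinarize(w)                    # forward pass uses trinary weights
    z = X @ wq + b
    p = 1.0 / (1.0 + np.exp(-z))         # sigmoid output
    grad_z = (p - y) / len(y)            # cross-entropy gradient
    # Straight-through-style update: the gradient computed through the
    # quantized weights is applied directly to the shadow weights
    w -= lr * (X.T @ grad_z)
    b -= lr * grad_z.sum()

# Evaluate with the deployed (trinary) weights only
acc = ((X @ trinarize(w) + b > 0) == (y > 0.5)).mean()
print(acc)
```

On this linearly separable toy problem the trinary classifier converges to the obvious solution (weights near [+1, +1]) and classifies nearly all points correctly, even though gradients never "see" full-precision weights at inference time.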

Very sad news.