Profile

Alison B. Lowndes
Works at Nvidia
Attended University of Leeds
Lives in Harrogate, UK
1,029 followers | 81,738 views

Stream

Alison B. Lowndes

commented on a video on YouTube.
Shared publicly  - 
 
Who are the vocalists please?
1
Alison B. Lowndes · TheShreester
3 comments
 
+Alison B. Lowndes Yeah, they were. Perhaps they're attached to the orchestra? I couldn't find their names online. Good luck!

Alison B. Lowndes

Shared publicly  - 
 
Peering inside the creative process ....
1

Alison B. Lowndes

Shared publicly  - 
 
The community of Kamangu, Kiambu County, Kikuyu grew up along the A104 NW out of Nairobi and lies in the incredible Rift Valley. Local people are relatively well off if they have a smartphone. Most struggle to buy kerosene for lamps to light their homes at night because…. No electricity. Yeah!? They are blessed, however, with Africa’s best mobile network and constant sunshine (except for the two annual rainy seasons). Internet.org and Facebook a...
1

Alison B. Lowndes

Shared publicly  - 
 
Stunning ~40-year-old album of #Kathmandu #Nepal!
Happy Birthday +Kevin Kelly & thanks for sharing x
 
Katmandu was an intensely ornate city that is easily damaged. The carvings, details, public spaces were glorious. My heart goes out to its citizens who suffer with their city. As you can see from these images I took in 1976, the medieval town has been delicate for decades. Loosely stacked bricks are everywhere. One can also see what splendid art has been lost. Not all has been destroyed, and I am sure the Nepalis will rebuild as they have in the past. Still, the earthquake shook more than just buildings. 

If you look carefully you may notice something unusual about these photos. They show no cars, pedicabs, or even bicycles. At the time I took these images, Katmandu was an entirely pedestrian city. Everyone walked everywhere. Part of why I loved it. That has not been true for decades, so this is something else that was lost long ago. Also missing back then was signage. There are few signs for stores, or the typical wordage you would see in any urban landscape today. Katmandu today is much more modern, much more livable, or at least it was. 

Blessings on you, Katmandu!
11 comments on original post
3

Alison B. Lowndes

Shared publicly  - 
 
Sand dams + geoengineering + turning water and sunshine into food. Who's in? #kenya   #sustainabledevelopment   #Konza  
1

Alison B. Lowndes

Shared publicly  - 
 
Thank you to +BBC One for publicly releasing this epic video. #StargazingLive  
1

Alison B. Lowndes

Shared publicly  - 
 
Fantastic lead-in for new DL users.
2

Alison B. Lowndes

Shared publicly  - 
 
 
Critique of Paper by "Deep Learning Conspiracy" (Nature 521, p. 436)

Machine learning is the science of credit assignment. The machine learning community itself profits from proper credit assignment to its members. The inventor of an important method should get credit for inventing it. She may not always be the one who popularizes it. Then the popularizer should get credit for popularizing it (but not for inventing it). Relatively young research areas such as machine learning should adopt the honor code of mature fields such as mathematics: if you have a new theorem, but use a proof technique similar to somebody else's, you must make this very clear. If you "re-invent" something that was already known, and only later become aware of this, you must at least make it clear later.

As a case in point, let me now comment on a recent article in Nature (2015) about "deep learning" in artificial neural networks (NNs), by LeCun & Bengio & Hinton (LBH for short), three CIFAR-funded collaborators who call themselves the "deep learning conspiracy" (e.g., LeCun, 2015). They heavily cite each other. Unfortunately, however, they fail to credit the pioneers of the field, which originated half a century ago. All references below are taken from the recent deep learning overview (Schmidhuber, 2015), except for a few papers listed beneath this critique focusing on nine items.

1. LBH's survey does not even mention the father of deep learning, Alexey Grigorevich Ivakhnenko, who published the first general, working learning algorithms for deep networks (e.g., Ivakhnenko and Lapa, 1965). A paper from 1971 already described a deep learning net with 8 layers (Ivakhnenko, 1971), trained by a highly cited method still popular in the new millennium. Given a training set of input vectors with corresponding target output vectors, layers of additive and multiplicative neuron-like nodes are incrementally grown and trained by regression analysis, then pruned with the help of a separate validation set, where regularisation is used to weed out superfluous nodes. The numbers of layers and nodes per layer can be learned in problem-dependent fashion.
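For illustration, here is a minimal sketch of the layer-growing idea described above (not Ivakhnenko's exact method): candidate two-input polynomial units are fitted by least-squares regression on the training set, scored on a separate validation set, and only the best few survive as inputs to the next layer. The polynomial basis, the keep parameter and the toy data are assumptions made purely for the sketch.

    import numpy as np
    from itertools import combinations

    def fit_unit(x1, x2, y):
        # fit y ~ w0 + w1*x1 + w2*x2 + w3*x1*x2 by least squares (assumed unit form)
        A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
        w, *_ = np.linalg.lstsq(A, y, rcond=None)
        return w

    def unit_out(w, x1, x2):
        return w[0] + w[1] * x1 + w[2] * x2 + w[3] * x1 * x2

    def grow_layer(X_tr, y_tr, X_va, y_va, keep=4):
        # train every candidate pair-unit on the training set, score it on the
        # validation set, and keep only the `keep` best (pruning superfluous nodes)
        scored = []
        for i, j in combinations(range(X_tr.shape[1]), 2):
            w = fit_unit(X_tr[:, i], X_tr[:, j], y_tr)
            val_err = np.mean((unit_out(w, X_va[:, i], X_va[:, j]) - y_va) ** 2)
            scored.append((val_err, i, j, w))
        scored.sort(key=lambda s: s[0])
        best = scored[:keep]
        # outputs of the surviving units become the features of the next layer
        next_tr = np.column_stack([unit_out(w, X_tr[:, i], X_tr[:, j]) for _, i, j, w in best])
        next_va = np.column_stack([unit_out(w, X_va[:, i], X_va[:, j]) for _, i, j, w in best])
        return next_tr, next_va, best[0][0]

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 6))
    y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=200)
    F_tr, F_va, err = grow_layer(X[:150], y[:150], X[150:], y[150:])
    print("best validation error after one grown layer:", err)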

2. LBH discuss the importance and problems of gradient descent-based learning through backpropagation (BP), and cite their own papers on BP, plus a few others, but fail to mention BP's inventors. BP's continuous form was derived in the early 1960s (Bryson, 1961; Kelley, 1960; Bryson and Ho, 1969). Dreyfus (1962) published the elegant derivation of BP based on the chain rule only. BP's modern efficient version for discrete sparse networks (including FORTRAN code) was published by Linnainmaa (1970). Dreyfus (1973) used BP to change weights of controllers in proportion to such gradients. By 1980, automatic differentiation could derive BP for any differentiable graph (Speelpenning, 1980). Werbos (1982) published the first application of BP to NNs, extending thoughts in his 1974 thesis (cited by LBH), which did not have Linnainmaa's (1970) modern, efficient form of BP. BP for NNs on computers 10,000 times faster per Dollar than those of the 1960s can yield useful internal representations, as shown by Rumelhart et al. (1986), who also did not cite BP's inventors.
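For illustration, a minimal sketch of reverse-mode gradient computation via the chain rule for a tiny two-layer network with a mean-squared-error loss; the network shape, data and learning rate are arbitrary assumptions, not any cited author's code.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 3))          # batch of 4 inputs with 3 features
    t = rng.normal(size=(4, 1))          # targets
    W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=(5, 1))

    # forward pass
    h = np.tanh(x @ W1)
    y = h @ W2
    loss = 0.5 * np.mean((y - t) ** 2)

    # backward pass: apply the chain rule layer by layer, from output towards input
    dy = (y - t) / y.size                # dL/dy
    dW2 = h.T @ dy                       # dL/dW2
    dh = dy @ W2.T                       # dL/dh
    dh_pre = dh * (1 - h ** 2)           # back through tanh
    dW1 = x.T @ dh_pre                   # dL/dW1

    # one gradient-descent step
    lr = 0.1
    W1 -= lr * dW1
    W2 -= lr * dW2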

3. LBH claim: "Interest in deep feedforward networks [FNNs] was revived around 2006 (refs 31-34) by a group of researchers brought together by the Canadian Institute for Advanced Research (CIFAR)." Here they refer exclusively to their own labs, which is misleading. For example, by 2006, many researchers had used deep nets of the Ivakhnenko type for decades. LBH also ignore earlier, closely related work funded by other sources, such as the deep hierarchical convolutional neural abstraction pyramid (e.g., Behnke, 2003b), which was trained to reconstruct images corrupted by structured noise, enforcing increasingly abstract image representations in deeper and deeper layers. (BTW, the term "Deep Learning" (the very title of LBH's paper) was introduced to Machine Learning by Dechter (1986), and to NNs by Aizenberg et al (2000), none of them cited by LBH.)

4. LBH point to their own work (since 2006) on unsupervised pre-training of deep FNNs prior to BP-based fine-tuning, but fail to clarify that this was very similar in spirit and justification to the much earlier successful work on unsupervised pre-training of deep recurrent NNs (RNNs) called neural history compressors (Schmidhuber, 1992b, 1993b). Such RNNs are even more general than FNNs. A first RNN uses unsupervised learning to predict its next input. Each higher level RNN tries to learn a compressed representation of the information in the RNN below, to minimise the description length (or negative log probability) of the data. The top RNN may then find it easy to classify the data by supervised learning. One can even "distill" a higher, slow RNN (the teacher) into a lower, fast RNN (the student), by forcing the latter to predict the hidden units of the former. Such systems could solve previously unsolvable very deep learning tasks, and started our long series of successful deep learning methods since the early 1990s (funded by Swiss SNF, German DFG, EU and others), long before 2006, although everybody had to wait for faster computers to make very deep learning commercially viable. LBH also ignore earlier FNNs that profit from unsupervised pre-training prior to BP-based fine-tuning (e.g., Maclin and Shavlik, 1995). They cite Bengio et al.'s post-2006 papers on unsupervised stacks of autoencoders, but omit the original work on this (Ballard, 1987).
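For illustration, a toy sketch of the compression idea behind the history compressor: a lower-level predictor learns to predict the next symbol, and only the symbols it fails to predict are passed up, so the higher level sees a much shorter sequence. Simple bigram frequency tables stand in for the RNNs here; this is a simplification of the principle, not the cited architecture.

    from collections import defaultdict, Counter

    def train_predictor(seq):
        # for each symbol, remember its most frequent successor
        table = defaultdict(Counter)
        for prev, nxt in zip(seq, seq[1:]):
            table[prev][nxt] += 1
        return {k: c.most_common(1)[0][0] for k, c in table.items()}

    seq = list("abcabcabcabxabcabcaby")   # mostly predictable, with two surprises
    level1 = train_predictor(seq)

    # only the unpredicted (surprising) symbols are handed to the next level
    surprises = [nxt for prev, nxt in zip(seq, seq[1:]) if level1.get(prev) != nxt]
    print(len(seq), "symbols in,", len(surprises), "reach level 2:", surprises)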

5. LBH write that "unsupervised learning (refs 91-98) had a catalytic effect in reviving interest in deep learning, but has since been overshadowed by the successes of purely supervised learning." Again they almost exclusively cite post-2005 papers co-authored by themselves. By 2005, however, this transition from unsupervised to supervised learning was an old hat, because back in the 1990s, our unsupervised RNN-based history compressors (see above) were largely phased out by our purely supervised Long Short-Term Memory (LSTM) RNNs, now widely used in industry and academia for processing sequences such as speech and video. Around 2010, history repeated itself, as unsupervised FNNs were largely replaced by purely supervised FNNs, after our plain GPU-based deep FNN (Ciresan et al., 2010) trained by BP with pattern distortions (Baird, 1990) set a new record on the famous MNIST handwritten digit dataset, suggesting that advances in exploiting modern computing hardware were more important than advances in algorithms. While LBH mention the significance of fast GPU-based NN implementations, they fail to cite the originators of this approach (Oh and Jung, 2004).

6. In the context of convolutional neural networks (ConvNets), LBH mention pooling, but not its pioneer (Weng, 1992), who replaced Fukushima's (1979) spatial averaging by max-pooling, today widely used by many, including LBH, who write: "ConvNets were largely forsaken by the mainstream computer-vision and machine-learning communities until the ImageNet competition in 2012," citing Hinton's 2012 paper (Krizhevsky et al., 2012). This is misleading. Earlier, committees of max-pooling ConvNets were accelerated on GPU (Ciresan et al., 2011a), and used to achieve the first superhuman visual pattern recognition in a controlled machine learning competition, namely, the highly visible IJCNN 2011 traffic sign recognition contest in Silicon Valley (relevant for self-driving cars). The system was twice better than humans, and three times better than the nearest non-human competitor (co-authored by LeCun of LBH). It also broke several other machine learning records, and surely was not "forsaken" by the machine-learning community. In fact, the later system (Krizhevsky et al. 2012) was very similar to the earlier 2011 system. Here one must also mention that the first official international contests won with the help of ConvNets actually date back to 2009 (three TRECVID competitions) - compare Ji et al. (2013). A GPU-based max-pooling ConvNet committee also was the first deep learner to win a contest on visual object discovery in large images, namely, the ICPR 2012 Contest on Mitosis Detection in Breast Cancer Histological Images (Ciresan et al., 2013). A similar system was the first deep learning FNN to win a pure image segmentation contest (Ciresan et al., 2012a), namely, the ISBI 2012 Segmentation of Neuronal Structures in EM Stacks Challenge.
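For illustration, a small sketch contrasting spatial averaging with max-pooling over 2x2 windows of a feature map (the feature map values are made up): averaging dilutes a single strong filter response, while max-pooling preserves it.

    import numpy as np

    def pool2x2(fmap, op):
        # split the map into non-overlapping 2x2 blocks and reduce each block
        h, w = fmap.shape
        blocks = fmap[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
        return op(blocks, axis=(1, 3))

    fmap = np.array([[0., 1., 0., 0.],
                     [0., 9., 0., 0.],
                     [0., 0., 2., 0.],
                     [0., 0., 0., 0.]])

    print(pool2x2(fmap, np.mean))  # spatial averaging: the 9 becomes 2.5
    print(pool2x2(fmap, np.max))   # max-pooling: the 9 survives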

7. LBH discuss their FNN-based speech recognition successes in 2009 and 2012, but fail to mention that deep LSTM RNNs had outperformed traditional speech recognizers on certain tasks already in 2007 (Fernández et al., 2007) (and traditional connected handwriting recognisers by 2009), and that today's speech recognition conferences are dominated by (LSTM) RNNs, not by FNNs of 2009 etc. While LBH cite work co-authored by Hinton on LSTM RNNs with several LSTM layers, this approach was pioneered much earlier (e.g., Fernandez et al., 2007).

8. LBH mention recent proposals such as "memory networks" and the somewhat misnamed "Neural Turing Machines" (which do not have an unlimited number of memory cells like real Turing machines), but ignore very similar proposals of the early 1990s, on neural stack machines, fast weight networks, self-referential RNNs that can address and rapidly modify their own weights during runtime, etc (e.g., AMAmemory 2015). They write that "Neural Turing machines can be taught algorithms," as if this was something new, although LSTM RNNs were taught algorithms many years earlier, even entire learning algorithms (e.g., Hochreiter et al., 2001b).

9. In their outlook, LBH mention "RNNs that use reinforcement learning to decide where to look" but not that they were introduced a quarter-century ago (Schmidhuber & Huber, 1991). Compare the more recent Compressed NN Search for large attention-directing RNNs (Koutnik et al., 2013).

One more little quibble: While LBH suggest that "the earliest days of pattern recognition" date back to the 1950s, the cited methods are actually very similar to linear regressors of the early 1800s, by Gauss and Legendre. Gauss famously used such techniques to recognize predictive patterns in observations of the asteroid Ceres.

LBH may be backed by the best PR machines of the Western world (Google hired Hinton; Facebook hired LeCun). In the long run, however, historic scientific facts (as evident from the published record) will be stronger than any PR. There is a long tradition of insights into deep learning, and the community as a whole will benefit from appreciating the historical foundations.

The contents of this critique may be used (also verbatim) for educational and non-commercial purposes, including articles for Wikipedia and similar sites.

References not yet in the survey (Schmidhuber, 2015):

Y. LeCun, Y. Bengio, G. Hinton (2015). Deep Learning. Nature 521, 436-444. http://www.nature.com/nature/journal/v521/n7553/full/nature14539.html

Y. LeCun (2015). IEEE Spectrum Interview by L. Gomes, Feb 2015: http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/facebook-ai-director-yann-lecun-on-deep-learning

R. Dechter (1986). Learning while searching in constraint-satisfaction problems. University of California, Computer Science Department, Cognitive Systems Laboratory. First paper to introduce the term "Deep Learning" to Machine Learning.

I. Aizenberg, N. N. Aizenberg, and J. P. L. Vandewalle (2000). Multi-Valued and Universal Binary Neurons: Theory, Learning and Applications. Springer Science & Business Media. First work to introduce the term "Deep Learning" to Neural Networks. Compare a popular G+ post on this: https://plus.google.com/100849856540000067209/posts/7N6z251w2Wd?pid=6127540521703625346&oid=100849856540000067209.

J. Schmidhuber (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85-117. Preprint: http://arxiv.org/abs/1404.7828

AMAmemory (2015): Answer at reddit AMA (Ask Me Anything) on "memory networks" etc (with references): http://www.reddit.com/r/MachineLearning/comments/2xcyrl/i_am_j%C3%BCrgen_schmidhuber_ama/cp0q12t


#machinelearning
#artificialintelligence
#computervision
#deeplearning

Link: http://people.idsia.ch/~juergen/deep-learning-conspiracy.html
25 comments on original post
1

Alison B. Lowndes

Shared publicly  - 
 
 
Tesla Home Electrical Storage

This looks to be a pretty big deal, especially if Tesla can actually scale to the potential here. And the timing is perfect given the ever-growing fragility of the power grid to both natural and man-made damage.

There are layers here, but the biggest I see is not only being able to store energy daily from solar/wind and other sources; equally powerful is being able to leverage peak-load pricing of the base grid, so that you charge the system at night during low load (and low rates) and use your own power during the day - and even potentially sell some power back to the grid during peak demand. The rate differential is 4 times or more, virtually paying for the cost of the system. Wow.
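As a back-of-the-envelope sketch of that arbitrage argument (all numbers - tariffs, battery size, system cost - are assumptions for illustration, not Tesla figures):

    battery_kwh = 10          # usable daily cycle (assumed)
    off_peak = 0.06           # $/kWh night-time charging rate (assumed)
    peak = 0.24               # $/kWh avoided or sold at peak (assumed, ~4x)
    system_cost = 3500.0      # installed cost (assumed)

    daily_saving = battery_kwh * (peak - off_peak)
    payback_years = system_cost / (daily_saving * 365)
    print(f"saving ${daily_saving:.2f}/day, payback in ~{payback_years:.1f} years")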

If this works as hoped it should be clean, silent, simple, low-cost to free, and should make our local power grids far more stable and secure. Win-win for everyone (but the power company).

Call me a fanboy of Musk (I am) but he is the Thomas Edison of our time - changing finance (PayPal), transportation (Tesla), space travel (SpaceX), clean energy (SolarCity) and now our base power grid itself (Tesla Home Power). Throw in Hyperloop and other ideas and he is simply a great, practical, get-it-done engineer in the classic style.
Tesla will announce a new product at the end of April that's not a car, CEO Elon Musk recently announced without revealing any further details. Even so, most people are expecting the secret product...
11 comments on original post
2

Alison B. Lowndes

Shared publicly  - 
 
More +AVIF Volunteering going ahead in the Amazon region from June
1
Mary Kariuki · Alison B. Lowndes
2 comments
 
And you, Mumbi x

Alison B. Lowndes

Shared publicly  - 
 
Not a theist? Okay so what are you then?
1
People
Have her in circles
1,029 people
Medo Tatom
Tessa Van Malhoup
Ddd Eee
Geoff Brown
Dexter Martin
Ajib Shrestha
graphene oxo
Brandon C. Meyer
Gilbert Bor
Work
Occupation
Deep Learning Solutions Architect & Community Manager, NVIDIA
Skills
Astrophysics, maths, deep learning, coding, international synergiser, project management, social media Queen, anything internet, sports bikes, swimming, weightlifting, best friend, taking no shit!
Employment
  • Nvidia
    Deep Learning Solutions Architect, 2015 - present
  • AVIF
    Founder Trustee, 2006 - present
  • KO2 Adventures CIC
    Executive Ops Director, 2008 - 2012
Places
Currently
Harrogate, UK
Previously
Manchester, UK - Zibo, China - Osnabrück, Germany - Tampa Bay, Florida
Story
Tagline
Google'll ne'r be Master of all t'internet till it be allowin page translation into English (Pirate) Arrrrrr
Introduction
Deep Learning Solutions Architect for Nvidia & founder trustee of a global volunteering network AVIF.org.uk. You can generally find me on Twitter or Facebook (sorry Google).
@alisonblowndes
/alison.lowndes
Bragging rights
Astrophysics, played* rugby, yelled at the Chinese PSB, inlined marathons, swimmer and GSX-R750 lover but my best role is Mother of two!
Education
  • University of Leeds
    Artificial Intelligence, 2012 - 2015
  • University of Leeds
    Physics with Astrophysics, 2004 - 2005
Basic Information
Gender
Female
Looking for
Friends, Networking
Birthday
May 3
Alison B. Lowndes's +1's are the things she likes, agrees with, or wants to recommend.
The BRCK at Rhino Charge 2015 | BRCK
www.brck.com

The BRCK goes off road and off grid again at the Rhino Charge in Kenya

A Religion for the Nonreligious | Wait But Why
waitbutwhy.com

Not a theist? Okay so what are you then?

BRCK+Pi | BRCK
www.brck.com

It's funny what causes one to get excited. For some it is the smell of something new. For others the satisfaction of helping someone out. Fo

UK must abandon or adapt in face of floods - environment - 19 February 2...
www.newscientist.com

The UK's future is wet. How can Britons learn to live with the water, and who will have to move to higher ground?

Bringing water to Nkiito
www.indiegogo.com

Nkiito is a beautiful serene maasai village in Amboseli, Kenya. They have no running water. We hope you'll visit and help.

Ben and Tarka Make History (Official Announcement)
scottexpedition.com

Ben Saunders and Tarka L'Herpiniere make history as they complete Captain Scott's iconic polar journey

Round One Hundred and Two (Day 102)
scottexpedition.com

Scott Expedition - Day 103: We've done 102 rounds with Antarctica and won every one of them

The White Hell (and Some Good News!) (Day 99)
scottexpedition.com

Scott Expedition - Day 99: A poignant day passing Scott's last camp

In the Hurt Box (Day 86)
scottexpedition.com

Scott Expedition - Day 86: In the Hurt Box

Humanity in jeopardy | KurzweilAI
www.kurzweilai.net

Watson vs. humans, January 13, 2011 (credit: IBM) Exactly three years ago, on January 13, 2011, we humans were dethroned by a computer on th

Two Months in Antarctica (and a Christmas Message from the ice) (Video)
scottexpedition.com

Christmas Day marks two months on the ice for Ben and Tarka. Here's a Christmas message from Ben in Antarctica and a snapshot into the first

Retracing Our Steps (Day 74)
scottexpedition.com

Scott Expedition - Day 74: Thinking about Scott and his men

travel2change - Project
travel2change.org

Together, we are on a mission to make travel better for travelers and locals. Use your passion to create change during your trip.

Doctor Who anniversary: 12 ways to become a Time Lord - science-in-socie...
www.newscientist.com

The world's longest-running science fiction show has its 50th anniversary on 23 November. We examine the ways that the fiction is rooted in

Incredibly relaxing and beautiful place, thank you for the pleasure.
Public - a year ago
reviewed a year ago
You suck! ... and I'll be making a formal complaint. My 18-year-old daughter just turned up to a family room booking with friends, part of a large party travelling from Harrogate to celebrate her 18th birthday, and you turned her away. Apparently the room she'd booked had been trashed the night before, which is no one's fault, definitely not hers, but you not only didn't offer her another room, you told her to drive 30 miles away to another Travelodge which was in terrible condition and would have cost them a fortune in a taxi home. This is her 18th birthday celebration and I then had to find them another hotel. I want a full refund, a refund of the petrol cost to the other dismal place you sent her, and also at least £100 compensation for the total inconvenience. This incident is about to go all over the internet too! Your customer service at Gateshead is disgraceful.
Public - a year ago
reviewed a year ago
3 reviews