SwissCognitive WhyWait? - Act Now!
14 followers
SwissCognitive's posts

SwissCognitive goes to Israel this summer. Get in touch with us if you're interested in connecting.

The Mystery of the Brain Examined

#AI #Autonomous_Car #Brain #Digital_Personal_Assistant #DL #Machine_Learning #News #Robot #Voice
The brain and how it learns may be among the most complicated puzzles in the quickly advancing field of neuroscience.
But Harvard is trying to unravel its mystery.
copyright by news.harvard.edu
The Ariadne Project, led by David Cox, an assistant professor of molecular and cellular biology and computer science at Harvard, mobilized a multi-university team of experts in neuroscience, physics, machine learning, and high-performance computing to explore the possibility of creating an artificial brain by reverse-engineering the brain of a rat while it learns. The aim is to build computer algorithms that replicate the way human brains perceive information and learn.
“This is where the fields of computer science and neuroscience are not only exploding, but are merging on a collision course that is allowing us to explore the way we conventionally think of understanding,” Cox told 80 attendees at the Harvard Ed Portal’s Faculty Speaker Series lecture “Toward an Artificial Brain” in Allston.
“Walking across a room and not falling over is hard for a computer [robot], but easy for us. Systems don’t quite understand the way we understand,” he said. “Let’s go back to the brain and find out what we’re missing.” Kenneth Blum, executive director of the Center for Brain Science (CBS), said in opening remarks that recent advances in artificial intelligence may indicate that intelligent machines are just around the corner. But how that might happen remains a question.
Moving from rats’ to human brains
“You have probably read about artificial intelligence in the news, seen it in movies, used Siri or Alexa or one of the other voice assistants … or know that soon in most cars to be sold there will be a little chip that will gather data to assist with self-driving cars,” he said. “But nobody thinks that simply massive computer power alone — which has driven all of these recent advances — will be enough to get us to magically understand real intelligence. That’s what we are going to hear from David tonight,” he said. Cox explained that by analyzing a rat’s brain activity, taking that brain apart, reconstructing the wiring, and mapping microscopic data, the Ariadne Project is paving the way on the artificial intelligence frontier.
read more – copyright by news.harvard.edu


Join us on LinkedIn http://bit.ly/2ifCm0V
Join us on Twitter http://bit.ly/2iuYYgA

The Mind-Scrambling TED Talk You Should Share Everywhere

#AI #Brain #Cognitive #Robot
Consciousness in machines may have more to do with the ability to breathe than with intelligence.
copyright by www.wired.com
The talk didn’t even happen during a particularly well-attended session. Cognitive neuroscientist Anil Seth spoke right at the end of the “Mind and Meaning” series of talks Wednesday morning. And what he said spoke to me: Despite recent claims to the contrary, consciousness is uniquely intertwined with being human. All this handwringing about AI becoming conscious? About uploading your brain to a robot? You probably shouldn’t worry. As a human, I found this very comforting. “My research shows that consciousness has less to do with pure intelligence and more to do with our nature as living and breathing organisms,” Seth said. After all, you don’t have to be a super-intelligent AI to suffer. You have to be alive.
Two different kinds of Consciousness
There are two sides to consciousness, Seth says: the outside experience and the inner experience. Starting with the outside—the sights, sounds and smells surrounding us—Seth says our experience depends on our brain’s ability to function as a prediction engine. He played a garbled audio clip for the audience, something indiscernible—then a clearer version of the clip. “I think Brexit is a really terrible idea,” the recording said. He then played the same earlier, garbled audio. This time, it was completely intelligible.
Are we all hallucinating all the time?
“The sensory information coming into your brain hasn’t changed at all,” Seth said. But something had changed after we heard the clearer audio. Our brains had information to predict what the garbled audio would sound like. Judging from the faces of the people around me, I knew I wasn’t the only one feeling amazed. (Gobsmacking audience participation? Check.) My heart was engaged. Then came my head. “If hallucination is a kind of uncontrolled perception, then perception is a kind of hallucination,” Seth said. But, he said, it’s a controlled hallucination, one in which sensory information from the world is reining in the brain’s predictions. “In fact, we’re hallucinating all the time, including right now. It’s just that when we agree about our hallucinations, we call that reality.” Dang.
read more – copyright by www.wired.com
http://blog.ted.com/how-does-consciousness-happen-anil-seth-speaks-at-ted2017/  



The Cognitive Bias President Trump Understands Better Than You

#Brain #Cognitive #News
Taking advantage of the biases every one of us has is easier than you think
copyright by www.wired.com
Americans born in the United States are more murderous than undocumented immigrants. Fighting words, I know. But why? After all, that’s just what the numbers say.
The untold other side of every coin
Still, be honest: you wouldn’t linger over a story with that headline. It’s “dog bites man.” It’s the norm. And norms aren’t news. Instead, you’ll see two dozen reporters flock to a single burning trash can during an Inauguration protest. The aberrant occurrence is the story you’ll read and the picture you’ll see. It’s news because it’s new. The problem here is not just that this singling out creates a distorted, fish-eye lens version of what’s really happening. It’s that the human psyche is predisposed to take an aberration—what linguist George Lakoff has called the “salient exemplar”—and conflate it with the norm. This cognitive bias itself isn’t new. But in a media environment driven by clicks, where politicians can bypass journalistic filters entirely to deliver themselves straight to citizens, it’s newly exploitable.
The science behind Cognitive Bias
Lakoff, a University of California, Berkeley linguist and well-known Democratic activist, cites Ronald Reagan’s “welfare queen” as the signature “salient exemplar.” Reagan’s straw woman—a minority mother who uses her government money on fancy bling rather than on food for her family—became an effective rhetorical bludgeon to curb public assistance programs even though the vast majority of recipients didn’t abuse the system in that way. The image became iconic, even though it was the exception rather than the rule. Psychologists call this bias the “availability heuristic,” an effect Trump has sought to exploit since the launch of his presidential campaign, when he referred to undocumented Mexican immigrants as rapists.
“It basically works the way memory works: you judge the frequency, the probability, of something based on how easily you can bring it to mind,” says Northeastern University psychologist John Coley. “Creating a vivid, salient image like that is a great way to make it memorable.”
This is the same bias that makes you fear swimming in the ocean lest you get attacked by a shark, despite shark attacks being far less common than, say, death by coconut. When something is memorable, it tends to be the thing you think of first, and then it has an outsize influence on your understanding of the world. After the movie Jaws came out, a generation of people was afraid to swim in the sea—not because shark attacks were more likely but because all those movie viewers could more readily imagine them.
Cognitive Malfunction or not?
Psychologists stress that your brain has to work this way, to a certain extent—otherwise you’d have a very hard time differentiating and prioritizing the avalanche of inputs you receive throughout your life. “It’s not a cognitive malfunction,” says Coley. “But it can be purposefully exploited.” When Trump uses a salient exemplar that will lodge in your brain, he’s manipulating your brain’s natural way of sorting information.
read more – copyright by www.wired.com



Dropbox uses Deep Learning to OCR your Documents

#AI #Bot #DL #Machine_Learning #News #OCR #Recognition #Trend
The need for OCR in Document Management
copyright by blogs.dropbox.com
This post takes you behind the scenes on how Dropbox built a state-of-the-art Optical Character Recognition (OCR) pipeline for its mobile document scanner. They used computer vision and deep learning advances such as bidirectional Long Short-Term Memory networks (LSTMs), Connectionist Temporal Classification (CTC), convolutional neural nets (CNNs), and more. The document scanner makes it possible to use the mobile phone to take photos and “scan” items like receipts and invoices. However, the mobile document scanner only outputs an image — any text in the image is just a set of pixels as far as the computer is concerned, and can’t be copy-pasted, searched for, or any of the other things you can do with text. Hence the need to apply Optical Character Recognition, or OCR. This process extracts actual text from the scanned document image.
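To make those ingredients concrete, here is a minimal, illustrative sketch of a CNN plus bidirectional LSTM plus CTC word recognizer in PyTorch. It is not Dropbox's actual pipeline; the image size, layer widths and character set below are arbitrary placeholders chosen only to show how the pieces fit together.

```python
# Illustrative only: tiny CNN -> bidirectional LSTM -> CTC recognizer.
# All sizes are placeholders, not values from Dropbox's system.
import torch
import torch.nn as nn

class TinyOCR(nn.Module):
    def __init__(self, num_chars, img_height=32):
        super().__init__()
        # Convolutional feature extractor: shrinks the image, keeps width as "time"
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        feat_dim = 64 * (img_height // 4)
        # Bidirectional LSTM reads the feature columns left-to-right and right-to-left
        self.rnn = nn.LSTM(feat_dim, 128, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, num_chars + 1)   # +1 for the CTC "blank" symbol

    def forward(self, x):                          # x: (batch, 1, H, W)
        f = self.cnn(x)                            # (batch, 64, H/4, W/4)
        b, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)   # (batch, time, features)
        out, _ = self.rnn(f)
        return self.fc(out).log_softmax(-1)        # per-column character scores

# CTC aligns unsegmented character labels to the per-column predictions
model = TinyOCR(num_chars=26)
images = torch.randn(4, 1, 32, 128)                # pretend word crops
log_probs = model(images).permute(1, 0, 2)         # CTC expects (time, batch, classes)
targets = torch.randint(1, 27, (4, 10))            # pretend transcriptions
loss = nn.CTCLoss(blank=0)(
    log_probs, targets,
    input_lengths=torch.full((4,), log_probs.size(0)),
    target_lengths=torch.full((4,), 10),
)
```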
Ready-made vs. building own pipeline
When Dropbox built the first version of the mobile document scanner, they used a commercial off-the-shelf OCR library in order to do product validation before diving too deep into creating their own machine learning-based OCR system. This meant integrating the commercial system into the scanning pipeline and offering the copy-paste and search features described above to business users to see if they found sufficient use from the OCR. Once it was confirmed that there was indeed strong user demand for the mobile document scanner and OCR, they decided to build their own in-house OCR system for several reasons. First, there was a cost consideration: having their own OCR system would save them significant money, as the licensed commercial OCR SDK charges based on the number of scans. Second, the commercial system was tuned for the traditional OCR world of images from flatbed scanners, whereas this operating scenario was much tougher, because mobile phone photos are far more unconstrained, with crinkled or curved documents, shadows and uneven lighting, blurriness and reflective highlights, etc.
Building the OCR system using AI
In fact, a sea change has happened in the world of computer vision that gave them a unique opportunity. Traditionally, OCR systems were heavily pipelined, with hand-built and highly tuned modules taking advantage of all kinds of conditions they could assume to be true for images captured using a flatbed scanner. The process of building these OCR systems was very specialized and labor intensive, and the systems could generally only work with fairly constrained imagery from flatbed scanners. The last few years have seen the successful application of deep learning to numerous problems in computer vision, giving us powerful new tools for tackling OCR without having to replicate the complex processing pipelines of the past, relying instead on large quantities of data so the system can automatically learn many of the previously manually designed steps. Perhaps the most important reason for building their own system is that it would give Dropbox more control over their own destiny, and allow them to work on more innovative features in the future […]
read more – copyright by blogs.dropbox.com



Why eCommerce Needs Artificial Intelligence To Survive

#AI #Bot #Brain #Digital_Personal_Assistant #Jobs #News #Retail #Robot #Voice
Artificial Intelligence has finally emerged from the shadowy realm of university labs and science fiction to splash down right in the heart of mainstream society. This seemingly rapid ascent has been bolstered by an unlikely (even pedestrian) bedfellow: online shopping.
copyright by boomtrain.com
Artificial Intelligence can go deep. Like, really deep. Its networks of software and hardware essentially mimic the neurons in our brains, analyzing huge swathes of digital data and drawing insights from the patterns they recognize. They take on tasks, such as identifying pictures and recognizing voice commands, and then they leverage their biggest advantages—speed and volume—to quickly master these tasks, even outperforming humans in some cases. As complex as AI can be, it can also be applied in service of simpler, more focused goals, such as making it easier for your customers to find relevant products in your online store.
If you’re an eCommerce provider, your business depends on your ability to connect users to the products they want. This is a big challenge in a world where product offerings and consumer behaviors are both moving targets. But it turns out, hitting those moving targets is something AI excels at. There has never been a better time to embrace your new robot overlords and make them your partners in creating a more user-friendly and more profitable venture.
1. Refined Search Capabilities
One business area that is ripe for refinement and reinvention is your website’s search capabilities. Previously, websites relied on static (human-guided) algorithms that did a poor job of adapting to new content and shifting user behaviors, and as a result they did a mediocre job of narrowing down the user’s search queries.
2. Analyzing Big Data
Why is this crucial to the survival of your eCommerce store? Big data gives you an insight into consumer trends, helping you to provide a solution to a problem that your customers may not even know they have. You will also be able to determine product attributes in real-time and then determine which ones need to be discounted or replenished and which ones are likely to be relevant to a specific search query.
3. Personalized Online Experiences
AI can provide intelligent engagement at every single customer touch point, something that would require far too much time for a human workforce. It can do this by identifying clusters and patterns in information, such as similarities between customers, past purchasing behavior, browsing history and other common threads (a toy sketch of this similarity idea follows point 4 below). With this information, your eCommerce store will be able to offer proactive guidance, such as providing your customers with a personal shopping assistant and customizing the sales experience based on their behavior in real time, while they are on your site.
4. Product Cataloging
If you can pair a beautiful website with highly efficient product cataloging, you will make some customers very happy. AI can play a vitally important role in product cataloging. Customers today expect more insightful and accurate product information from their retailers, and if you don’t meet their expectations, you could find yourself losing out to a competitor. 30% of American adults say they would purchase goods from an online retailer they’ve never even heard of before if it displays detailed and accurate product information.
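As a toy illustration of the similarity idea from point 3, the sketch below recommends products that the most similar customer (by purchase history) has bought. The data, the cosine scoring and the function names are invented for illustration; a real system would use far richer behavioral signals.

```python
# Hypothetical sketch of "customers like you also bought"; not any vendor's API.
import numpy as np

# Rows = customers, columns = products; 1 means "purchased"
purchases = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(customer_idx, top_n=2):
    me = purchases[customer_idx]
    # Score every other customer by similarity of purchase history
    sims = [cosine(me, other) if i != customer_idx else 0.0
            for i, other in enumerate(purchases)]
    neighbour = int(np.argmax(sims))
    # Suggest items the most similar customer bought that this one hasn't
    candidates = np.where((purchases[neighbour] == 1) & (me == 0))[0]
    return candidates[:top_n]

print(recommend(0))   # -> [3]: bought by the most similar customer, not yet by customer 0
```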
read more – copyright by boomtrain.com



Nvidia selects 5 most-disruptive AI startups

#AI #Chatbot #CTO #DL #Game #Jobs #Machine_Learning #News #Technology #Video
copyright by venturebeat.com
Nvidia is on a quest to find the most disruptive artificial intelligence startups. This quest is part of a larger contest dubbed Nvidia Inception, which is screening more than 600 entrants to cull the best AI startups in three big categories. Jen-Hsun Huang, CEO of Nvidia, hosted a Shark Tank-style event this week as part of the search to find the best AI startups. Huang and a panel of judges listened to pitches from 14 AI startups across three categories. These were filtered from the more than 600 contestants who entered the Nvidia Inception contest. The winners will walk away with $1.5 million in cash at a dinner on May 10 at Nvidia’s GPU Technology Conference.
Smartvid.io
The idea is to make construction safer by using AI on jobs that safety experts can’t physically get to in a day, said Josh Kanner, CEO of Smartvid.io. Construction generates more than $10 trillion a year in revenues, but accidents lead to more than 1,000 deaths a year. Smartvid.io realized that photo and video data from job sites is often wasted, and it is unlocking the value of that content by analyzing it for safety information. Smartvid.io automates the process of importing video into its app. Then the company’s safety AI analyzes the data for safety issues. All of the images are made searchable, and safety managers can make use of them. The system has integrated workflows and alerts. Workers get a suggestion from Smartvid.io, and they can rate that suggestion, which feeds back into the AI model.
Deep Instinct
The Tel Aviv-based company is applying AI to the task of detecting malware. About 1 million new variants of malware are spread every day. Often, a new family of malware is only about 30 percent different from the code of something that came before. Many antivirus vendors focus on detecting known malware in a library, using reactive technology. But Deep Instinct believes the better solution is deep learning, which can be used to detect unknown malware in real time. It doesn’t rely on virus signatures, sandboxing of content, or heuristics. Instead, it only looks at the raw binary details of the file in question. And Deep Instinct doesn’t require frequent updates, said Eli David, chief technology officer at Deep Instinct. It trains the deep learning neural network on hundreds of millions of files. In short, it focuses on prevention, not reaction.
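As a rough illustration of that "raw bytes only" idea, and emphatically not Deep Instinct's actual model, the sketch below feeds a file's byte values into a small neural network that outputs a malicious/benign probability. The layer sizes and the 4 KB input length are arbitrary assumptions.

```python
# Illustrative sketch: classify a file from raw bytes, with no signatures,
# sandboxing or hand-written heuristics. The architecture is a placeholder.
import torch
import torch.nn as nn

class RawByteClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(256, 8)                 # one vector per possible byte value
        self.conv = nn.Conv1d(8, 64, kernel_size=16, stride=8)
        self.head = nn.Sequential(nn.AdaptiveMaxPool1d(1), nn.Flatten(),
                                  nn.Linear(64, 1))

    def forward(self, byte_ids):                          # (batch, length), ints in 0..255
        x = self.embed(byte_ids).transpose(1, 2)          # (batch, 8, length)
        x = torch.relu(self.conv(x))
        return torch.sigmoid(self.head(x))                # probability the file is malicious

# Training would use a very large corpus of labeled benign/malicious files;
# here we just push a random batch through the untrained network.
model = RawByteClassifier()
fake_files = torch.randint(0, 256, (2, 4096))
print(model(fake_files).shape)                            # torch.Size([2, 1])
```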
Cape Analytics
Home insurance companies have to collect data on more than 100 million homes in the U.S. That isn’t easy, and Cape Analytics is using geospatial imagery, computer vision, and machine learning to help. Cape Analytics can collect aerial images that reveal a lot about a home, if they are properly analyzed, said Busy Cummings, vice president of sales. The initial focus is on the property insurance industry, but many others, from tax assessors to inspectors, also need a lot more data about homes. Homeowners are often unreliable sources of information about their own homes, partly because they may not know the answers and partly because they may understand how to game the system to get lower insurance rates, Cummings said. Public records are also a frequently used source of data, but they are often outdated. And using inspectors is costly and often time-consuming.
Konux
The Munich, Germany-based company is building the software systems needed to measure how railroads are being used and gain insights from that data, said Vlad Lata, chief technology officer at Konux. The data on railways isn’t very reliable, so Konux makes a sensor that can be placed on the concrete slabs that hold tracks in place. One of Konux’s customers is Deutsche Bahn, the German train company, which has more than 70,000 switches on its tracks throughout Europe. Those switches require about five manual inspections a year, each requiring about three people for a minimum two-hour inspection. Deutsche Bahn spends about $9,100 per switch, and that comes to about $630 million a year.
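A quick back-of-the-envelope check, using only the figures quoted above, shows how those numbers fit together:

```python
# Sanity check of the Deutsche Bahn figures quoted in the paragraph above.
switches = 70_000            # switches across the network ("more than 70,000")
cost_per_switch = 9_100      # USD spent per switch per year
inspections = 5              # manual inspections per switch per year
people, hours = 3, 2         # crew size and minimum duration per inspection

print(f"Annual spend: ${switches * cost_per_switch / 1e6:.0f}M")            # ~$637M, i.e. "about $630 million"
print(f"Person-hours per switch per year: {inspections * people * hours}")  # 30
```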
Digital Genius
Digital Genius brings deep learning and AI to customer service operations. All of us spend about 43 days of our lives on the phone with customer service. It’s a $350 billion a year industry, and $300 billion of that cost lies in the salaries of people who are handling the calls. Digital Genius started out by creating a chatbot that follows a rule-based system to help offload customer service. But the company found that chatbots have limited usefulness. Then Digital Genius shifted more of its focus to deep-learning algorithms. “We built chatbots before they were cool, learned lessons of where they don’t work, and now we can solve problems much more quickly,” said Mikhail Naumov, cofounder of Digital Genius. […]
read more – copyright by venturebeat.com



Why artificial intelligence needs the human touch

#AI #Chatbot #Education #Healthcare #Jobs #Microsoft #News #Recognition #Retail #Technology
Forget the scare stories:
AI is produced by engineers, and its creators have the capability to design it in ways that enhance, rather than hinder, people’s lives
copyright by news.microsoft.com
“I propose to consider the question, ‘can machines think?’ This should begin with definitions of the meaning of the word ‘machine’ and ‘think’.”
So wrote computing pioneer Alan Turing in the introduction to his seminal paper on machine intelligence, published in the journal Mind in 1950.
That paper – introducing the ‘imitation game’, Turing’s adversarial test for machine intelligence, in which a human must decide whether what we now call a chatbot is a human or a computer – helped spark the field of research which later became more widely known as artificial intelligence.
Whilst no researcher has yet made a general purpose thinking machine – what’s known as artificial general intelligence – that passes Turing’s test, a wide variety of special purpose AIs have been created to focus on, and solve, very specific problems, such as image and speech recognition, and defeating chess and Go champions.
However, whenever AI hits trouble – such as when prototype autonomous cars cause accidents, robots appear poised to eliminate jobs, or AI algorithms access personal data without permission – the news media surfaces major concerns about a societal downside to AI.
One trope in such stories is that AI is a hard-to-harness technology, one that could run away from human control at any time. But the truth is far more nuanced. At Microsoft, the aim is to use AI as a tool just like any other – one that’s used by engineers to achieve an end that strongly benefits people in one way or another, whether they are at home, or at work in fields as diverse as education, healthcare, aerospace, manufacturing or retail.

“We are trying to teach machines to learn so that they can do things that humans currently do, but in turn they should help people by augmenting their experiences,” says Microsoft CEO Satya Nadella.

read more – copyright by news.microsoft.com



IoT On Pace To Replace Mobile Phones As Most Connected Device In 2018

#IoT #News #Technology #Trend
IoT sensors and devices are expected to exceed mobile phones as the largest category of connected devices in 2018, growing at a 23% compound annual growth rate (CAGR) from 2015 to 2021.
copyright by www.forbes.com

Internet of Things (IoT) sensors and devices are expected to exceed mobile phones as the largest category of connected devices in 2018, growing at a 23% compound annual growth rate (CAGR) from 2015 to 2021.


By 2021 there will be 9B mobile subscriptions, 7.7B mobile broadband subscriptions, and 6.3B smartphone subscriptions.


Worldwide smartphone subscriptions will grow at a 10.6% CAGR from 2015 to 2021, with Asia/Pacific (APAC) gaining 1.7B new subscribers alone.

These and other insights are from the 2016 Ericsson Mobility Report (PDF, no opt-in). Ericsson has provided a summary of the findings and a series of interactive graphics here. Ericsson created the subscription and traffic forecast baseline on which this analysis is based using historical data from a variety of internal and external sources. Ericsson also validated the trend analysis using its planning models. Future development is estimated based on macroeconomic trends, user trends (researched by Ericsson ConsumerLab), market maturity, technology development expectations and documents such as industry analyst reports, on a national or regional level, together with internal assumptions and analysis. In addition, Ericsson regularly performs traffic measurements in over 100 live networks in all major regions of the world. For additional details on the methodology, please see page 30 of the study.
Key takeaways from the 2016 Ericsson Mobility Report include the following: Internet of Things (IoT) sensors and devices are expected to exceed mobile phones as the largest category of connected devices in 2018, growing at a 23% compound annual growth rate (CAGR) from 2015 to 2021. Ericsson predicts there will be a […]
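For readers who want to see what a 23% compound annual growth rate implies, here is a minimal illustration. The 2015 base value is a made-up placeholder, not a figure from the report; only the growth rate and the 2015 to 2021 window come from the text above.

```python
# What a 23% CAGR does to a quantity between 2015 and 2021.
def project(base, cagr, years):
    return base * (1 + cagr) ** years

base_2015 = 4.0   # hypothetical billions of connected IoT devices in 2015
for year in range(2015, 2022):
    print(year, round(project(base_2015, 0.23, year - 2015), 1))
# 1.23 ** 6 is roughly 3.46, so a 23% CAGR more than triples the 2015 figure by 2021.
```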
read more – copyright by www.forbes.com


