Profile

Daniel Estrada
Lives in Internet
30,624 followers|10,855,174 views

Stream

Daniel Estrada

Shared publicly  - 
 
 
Would you be more willing to talk to Google Assistant if you knew about its childhood?
Google Assistant is much more bot-like than Alexa and Siri, though it can do more powerful stuff. Google wants to make the Assistant seem more human. They've turned to artists from the Google Doodle team and a freelance artist from Pixar to solve the problem.
5 comments on original post
 
"It was never easy for me. I was born a poor black child..."

- Steve Martin, "The Jerk"

Daniel Estrada

Shared publicly  - 
 
 
Let us read what we paid for

Imagine a business like this: you get highly trained experts to give you their research for free... and then you sell it back to them.  Of course these experts need equipment, and they need to earn a living... so you get taxpayers to foot the bill.  

And if the taxpayers want to actually read the papers they paid for?   Then you charge them a big fee!

It's not surprising that with this business model, big publishers are getting rich while libraries go broke.  Reed-Elsevier has a 37% profit margin!

But people are starting to fight back — from governments to energetic students like ‎Alexandra Elbakyan here.

On Friday, the Competitiveness Council—a gathering of European ministers of science, innovation, trade, and industry—said that all publicly funded scientific papers published in Europe should be made free to access by 2020.

This will start a big fight, and it may take until after 2020.   But Alexandra Elbakyan isn't waiting around.

In 2011, as a computer science grad student in Kazakhstan, she got sick of paying big fees to read science papers.  She set up SciHub, a pirate website that steals papers from the publishers and sets them free.

SciHub now has 51,000,000 papers in its database.  In October 2015, Elsevier sued them.  In November, their domain name was shut down.  But they popped up somewhere else.  By February, people were downloading 200,000 papers per day.   Even scientists with paid access to the publisher's databases are starting to use SciHub, because it's easier to use.

Clearly piracy is not the ultimate solution. Elbakyan now lives in an undisclosed location, to avoid being extradited.  But she gave the world a much-needed kick in the butt.   The old business model of "get smart people to work for free and sell the product back to them" is on its way out.

For more, read:

John Bohannon, Who's downloading pirated papers? Everyone, Science, 28 April 2016, http://www.sciencemag.org/news/2016/04/whos-downloading-pirated-papers-everyone

and especially the SciHub Twitter feed:

https://twitter.com/Sci_Hub

Also read this:

Martin Enserink, In dramatic statement, European leaders call for ‘immediate’ open access to all scientific papers by 2020, Science,
27 May 2016, http://www.sciencemag.org/news/2016/05/dramatic-statement-european-leaders-call-immediate-open-access-all-scientific-papers

The Dutch government is really pushing this!  Congratulations to them!

#openaccess  
102 comments on original post
 
It is unfortunate that a paywall sits in front of most research. Open access certainly benefits more people. The journals will lose their revenue stream; I am unsure if or how it might affect their peer review process. Perhaps peer review could be detached entirely from journals, but this may be a hindrance if researchers don't trust submitting to an unknown portal, especially in a highly competitive field of research.

Daniel Estrada

Shared publicly  - 
 
 
Motion AI lets anyone easily build a bot

Motion AI, a Chicago company that lets anyone easily build a bot without touching a line of code, has announced it is open for business after several months of private testing.

What, another bot-builder? Dozens of other startups have launched to build bots for developers, especially after Facebook kicked off a bot craze last month that now has tens of thousands of developers building bots on Facebook Messenger alone. But Motion AI stands out because it hand-holds you through building every aspect of the bot’s flow, including deployment across most of the bot platforms (Facebook, Slack, SMS, email, Web and so on).

Moreover, it has created what it calls bot “modules,” which package up the logic required for building particular bot features. This saves novices — and even experienced developers — multiple steps. These modules will soon be featured in a store (to be launched in about a month) where customers can take whatever module they need as they put together their bots, according to founder and chief executive David Nelson.
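To make the "module" idea concrete, here is a minimal sketch of what packaging bot logic into reusable handlers might look like. Everything here (the function names, the routing scheme) is invented for illustration and has nothing to do with Motion AI's actual product.

```python
# Hypothetical sketch of the "bot module" idea: self-contained handlers
# that each package one piece of conversational logic, composed into a flow.

def greeting_module(message):
    """Handle hellos; return a reply, or None if not applicable."""
    if message.lower().strip() in {"hi", "hello", "hey"}:
        return "Hello! How can I help you today?"
    return None

def order_status_module(message):
    """Pretend lookup of an order number mentioned in the message."""
    for word in message.split():
        if word.isdigit():
            return f"Order {word} is on its way."
    return None

def fallback_module(message):
    """Catch-all reply when no other module applies."""
    return "Sorry, I didn't catch that."

def run_bot(message, modules):
    """Try each module in order; the first non-None reply wins."""
    for module in modules:
        reply = module(message)
        if reply is not None:
            return reply

MODULES = [greeting_module, order_status_module, fallback_module]
```

A "store" of modules would then just be a library of such handlers that customers drop into their own `MODULES` list.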
1 comment on original post
2 comments
 
So just as we have an IVR system responding to us for voice calls, we will now have a chat-bot responding to a chat session! Companies think machines are better than humans in responding to other humans. Mmm... what are the other ways we can disrupt human to human interaction? 

Daniel Estrada

Shared publicly  - 
 
> Practopoiesis is a theory on how life organizes, including the organization of a mind. It proposes the principles by which adaptive systems function. One and the same theory covers life and the mind. It is a general theory of what it takes to be biologically intelligent. Being general, the theory is applicable to the brain as much as it is applicable to artificial intelligence (AI) technologies (see AI-Kindergarten). What makes the theory so general is that it is grounded in the principles of cybernetics, rather than describing the physiological implementations of those mechanisms (inhibition/excitation, plasticity, etc.).

The most important presumption about the brain that practopoietic theory challenges is the generally accepted idea that the dynamics of a neural network (with its excitatory and inhibitory mechanisms) is sufficient to implement a mind. Practopoiesis tells us that this is not enough. Something is missing. Practopoiesis also offers answers about what is missing, both theoretically and in the form of a concrete implementation. The theoretical answer lies in T3-systems and the processes of anapoiesis. The concrete implementation in the brain is based on the neural adaptation mechanisms. These mechanisms enable us to adaptively deal with the context within which we have to operate and thus to be intelligent.

The main contribution to the mind-body problem: Practopoiesis suggests that we should think about mind differently from how we are used to. According to T3-theory, the mind cannot be implemented by a classical computation, which consists of symbol manipulation within a closed system (a “boxed” computation machine). Rather, a mind, i.e., a thought, is a process of adaptation to an organism’s environment. This requires the “computation” system to be open and to interact with the environment while a thought or a percept is evolving. The reason why we are conscious and machines are not is that our minds are interacting with the surrounding world while undergoing the process of thought, and machines are not — machines recode inputs into symbols and then juggle the symbols internally, devoid of any further influences from the outside.

More: http://www.danko-nikolic.com/practopoiesis/
via +Jon Lawhead

// This view seems promising. I think it leans on the anti-representational arguments too strongly, perhaps because I've been defending representationalism recently. I think the right view will be one that describes a cybernetic control system with robust representational resources at its disposal. I think it's correct to say that the brain isn't fundamentally a representational system. The nervous system is fundamentally a system for coordinating action. But in so coordinating, it really can juggle "internal" representations around and inspect them. In Kinds of Minds, Dennett calls these the Popperian creatures, after Karl Popper, who said that such thinking "permits our hypotheses to die in our stead."

My instructor from grad school, Dr. Brewer, once gave the following argument for internal representations, which I still find completely convincing. Here's the challenge: close your eyes and tell me how many windows are in your parents' living room. It's unlikely that you've thought of this question explicitly; if you can generate an answer, it's likely that you're conjuring an internal model of the room and counting the windows on that model. That's exactly a case of "juggling symbols internally". Our capacity to reason about such mental models was the subject of my advisor's 2006 book, Models and Cognition:

https://drive.google.com/file/d/0B4me4PbBMBmOVXM3NjJ3aGljX1U/view?usp=sharing
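As a toy illustration of Brewer's point (my own sketch; the room layout is invented), here is an explicit "internal model" that lets you answer a question you never stored the answer to, just by inspecting the model:

```python
# A "mental model" of a living room as an explicit data structure.
# No fact like "3 windows" is stored anywhere; answers are derived
# by scanning the model, much as one counts windows in imagination.

living_room = {
    "walls": {
        "north": ["window", "bookshelf"],
        "east":  ["door"],
        "south": ["window", "window", "couch"],
        "west":  ["fireplace"],
    }
}

def count_features(room, feature):
    """Answer a question by inspecting the model, not a stored fact."""
    return sum(wall.count(feature) for wall in room["walls"].values())
```

`count_features(living_room, "window")` yields 3 even though that number appears nowhere in the model — the "juggling symbols internally" the quote describes.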

But my sense is that practopoiesis can deal with such cases fairly comfortably. Still, I want to deal with the last claim in the quote above, about the differences between us and machines. He's right, in some sense, that neurons are sensitive to much more of the nearby activity than electrical circuits are. But it's worth saying that quite a lot of the technical challenge in building microchips is in keeping the signals clear and distinct. The reason microchips can process at such high speeds is that we can keep these signals reliably clear at very small scales. In other words, this isn't an inevitable feature of our machines; it has been an explicit design goal.

From this perspective, it's worth mentioning that brains also do quite a lot of work insulating the signals from surrounding neurons. There are lots of neurons passing through any given space (see: http://goo.gl/jr2dHA), but only a few neurons are actually talking to each other. The rest are insulated from the signal by other types of neural cells. In other words, neural signals aren't entirely open to influence from the outside. But certainly they are tolerant of more influence than a microchip.

The other place to object, though, is the extent to which computation happens independent of outside influence. When I'm playing a video game, for instance, there's certainly a lot of processing happening "under the hood" of the machine, but there's also a lot of sensitivity to my behavior and interaction with that system, and so there's a lot of interdependence and interaction between the player and the computer. At a very low level the computation isn't interactive, but at the level of the game itself, the machine is interactive nearly to the point of immersion. And even that's not strictly true! Modern video cards will only render from the perspective of the player, which means that in a very direct way, the computations performed are tightly linked to what the player is doing, with very rapid response times. To describe such machines as "devoid of influence from the outside" seems strange from this perspective.
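A minimal sketch of the point (mine, not from any real game engine): in an update loop, each step folds in input arriving from outside while the program runs, so the system is open in exactly the sense the quoted passage denies.

```python
# A stripped-down interactive update loop: each "frame" the computation
# depends on fresh player input, so what the machine computes is a
# function of the ongoing interaction, not a closed symbol shuffle.

def run_frames(inputs, start=0):
    """Fold each frame's player input into the evolving game state."""
    state = start
    history = []
    for player_input in inputs:        # input arrives every frame
        state = state + player_input   # computation depends on it
        history.append(state)
    return history
```

Feed the same program a different interaction and it computes something different — which is the whole point.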

What a lot of the criticisms of AI I've been ranting against fail to appreciate is the basic issue of multiple realizability: the same high-level process might in fact be constituted by many different kinds of low-level processes, each of which produces a functional analog at the higher level by different means. There's nothing about silicon that fundamentally prevents it from being interactive in the appropriate ways. The myth that biology is fundamentally "alive" while electronics are "inert" must be resisted wherever it appears.

But I'm ranting, and none of these issues seem like big problems for practopoiesis; in fact, I'd expect the author to agree with most of what I've said. Some of these articles seem rather new, within the last year. I'll be interested to see what the scholarly response is. 
8 comments
 
+Danko Nikolic Thank you for stimulating our thoughts!

Daniel Estrada

Shared publicly  - 
 
> Our humanoid robot, the iCub (I as in “I robot”, Cub as in the man-cub from Kipling’s Jungle Book), has been specifically designed to support research in embodied artificial intelligence (AI). At 104 cm tall, the iCub is the size of a five-year-old child. It can crawl on all fours, walk and sit up to manipulate objects. Its hands have been designed to support sophisticated manipulation skills. The iCub is distributed as Open Source following the GPL/LGPL licenses and can now count on a worldwide community of enthusiastic developers. More than 30 robots have been built so far, which are available in laboratories in Europe, the US, Korea and Japan (see http://www.iCub.org). It is one of the few platforms in the world with a sensitive full-body skin to deal with safe physical interaction with the environment.

https://www.youtube.com/watch?v=pNIvdmJUlVE
via +Boing Boing http://boingboing.net/2016/05/23/gaze-controller-for-humanoid-r.html

Daniel Estrada

Shared publicly  - 
 
> Q: Do you dare predict a timeline for that?
A: More than five years. I refuse to say anything beyond five years because I don’t think we can see much beyond five years.
...

Q: In the ’80s, scientists in the AI field dismissed deep learning and neural networks. What changed?
A: Mainly the fact that it worked. At the time, it didn’t solve big practical AI problems, it didn’t replace the existing technology. But in 2009, in Toronto, we developed a neural network for speech recognition that was slightly better than the existing technology, and that was important, because the existing technology had 30 years of a lot of people making it work very well, and a couple grad students in my lab developed something better in a few months. It became obvious to the smart people at that point that this technology was going to wipe out the existing one.

Google was then the first to use their engineering to get it into their products, and in 2012 it came out in Android and made the speech recognition in Android work much better than before: it reduced the word-error rate to about 26 per cent. Then, in 2012, students in my lab took that technology that had been developed by other people and developed it even further: while the existing technology was getting 26 per cent errors, we got 16 per cent errors. In the years after we did that, people said, ‘Wow, this really works.’ They were very skeptical for many, many years; they published papers dismissing it. Over the next years, they all switched to it.


// For anyone confused by +Luciano Floridi's article from last week (https://goo.gl/Q3OU7Y):

There's an important distinction between the proponents of AI and the Singularitarians, and +Geoffrey Hinton does an excellent job here of representing this space. For Hinton, there's no question of a machine's thinking (or believing, deciding, imagining, etc; see his classic 2007 Google talk, esp. ~24:00 https://goo.gl/qdZviJ). We really can build computers that do all those things to demonstrable effect. And yet Hinton's optimism about AI doesn't preclude any realism about the gap between computers and humans: he says brains operate at roughly a million times the capacity of our best artificial neural nets today, with a hundred thousand times less power consumption.

Those are big numbers, for sure, but the underlying point is crystal clear: that these are differences of scale which pose an engineering challenge, not ones of essence that pose an affront to logical necessity. And given that we've experienced a trillion-fold increase in computing power over the last 60 years (http://goo.gl/mLMZ4p), it's hard to interpret these numbers as impossible to overcome. It probably won't happen in the next five years, and as Hinton rightly notes, who knows what will happen beyond that. The future success of AI is not fated to happen. But we can still heap criticism on those who claim, as Floridi does, that "No conscious, intelligent entity is going to emerge from a Turing Machine." Such views have no place in contemporary discussions of AI.
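As a quick sanity check on that trillion-fold figure (my arithmetic, not the article's): a trillion-fold increase over 60 years works out to roughly an 18-month doubling time, which is just the familiar statement of Moore's law.

```python
import math

# A trillion-fold increase is about 40 doublings (2**40 ≈ 1.1e12).
doublings = math.log2(1e12)                # ≈ 39.9 doublings
months_per_doubling = 60 * 12 / doublings  # ≈ 18 months per doubling
```

So the "trillion-fold over 60 years" claim is internally consistent with the standard Moore's-law cadence.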

I quoted a piece of the interview crediting the success of commercial AI applications with helping bring AI out of its winter. It is interesting to think about the socioeconomic viability of AI as critical to its development. It suggests that if AI does not close these large gaps in this iteration, there will be not just technical but also socioeconomic reasons for the failure. The first AI winter was also due to a lack of funding, but that was funding for academic research, not for smartphone apps that everyone uses.

https://en.wikipedia.org/wiki/AI_winter
 
The meaning of AlphaGo, the AI program that beat a Go champ

Geoffrey Hinton, the godfather of ‘deep learning’—which helped Google’s AlphaGo beat a grandmaster—on the past, present and future of AI

Q: Beyond games, then—what might come next for AI?
A: It depends who you talk to. My belief is that we’re not going to get human-level abilities until we have systems that have the same number of parameters in them as the brain. So in the brain, you have connections between the neurons called synapses, and they can change. All your knowledge is stored in those synapses. You have about 1,000-trillion synapses—10 to the 15, it’s a very big number. So that’s quite unlike the neural networks we have right now. They’re far, far smaller, the biggest ones we have right now have about a billion synapses. That’s about a million times smaller than the brain.

Q: Can the growth in computing continue, to allow applications of deep learning to keep expanding?
A: For the last 20 years, we’ve had exponential growth, and for the last 20 years, people have said it can’t continue. It just continues. But there are other considerations we haven’t thought of before. If you look at AlphaGo, I’m not sure of the fine details of the amount of power it was using, but I wouldn’t be surprised if it was using hundreds of kilowatts of power to do the computation. Lee Sedol was probably using about 30 watts, that’s about what the brain takes, it’s comparable to a light bulb. So hardware will be crucial to making much bigger neural networks, and it’s my guess we’ll need much bigger neural networks to get high-quality common sense.
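Hinton's two gaps, restated as plain ratios. The numbers are his from this interview; I have taken "hundreds of kilowatts" as 300 kW purely for concreteness.

```python
# Size gap: ~1e15 synapses in a brain vs ~1e9 in the largest nets.
brain_synapses = 1e15
biggest_net_synapses = 1e9
size_gap = brain_synapses / biggest_net_synapses  # ~a million

# Power gap: AlphaGo at "hundreds of kilowatts" (assumed 300 kW here)
# vs a human player at ~30 watts.
alphago_watts = 300_000
human_watts = 30
power_gap = alphago_watts / human_watts           # ~ten thousand
```

Both are engineering-scale gaps, not differences in kind — which is the point of the commentary above.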

7 comments on original post
11 comments
 
+Deen Abiola Agreed. However one would think, erroneously, that a person could dedicate more than one millionth of their own neurons to the study and analysis of a single subject without obstruction.

Daniel Estrada

Shared publicly  - 
 
> A robot needs to be able to detect and classify unforeseen physical states and disturbances, rate the potential damage they may cause to it, and initiate appropriate countermeasures, i.e., reflexes. In order to tackle this demanding requirement, the human antetype shall serve as our inspiration, meaning that human pain-reflex movements are used for designing corresponding robot pain-sensation models and reaction controls. We focus on the formalization of robot pain, based on insights from human pain research, as an interpretation of tactile sensation.

https://www.youtube.com/watch?v=3M75f4D9pwg
More: http://spectrum.ieee.org/automaton/robotics/robotics-software/researchers-teaching-robots-to-feel-and-react-to-pain
via +Wayne Radinsky

Daniel Estrada

Shared publicly  - 
 
 
"Self Racing Cars is a new race series started by technology entrepreneur Joshua Schachter as a way for companies and hobbyists to test their autonomous vehicles and learn from each other. There are no rules, and there is no qualifying -- anyone with an autonomous car or autonomous vehicle technology can apply to participate at the events currently being held at Thunderhill Raceway in Willows, Calif. That means that even if it's just a go-kart, as long as it doesn't have a driver you can race it on the track."
If Roborace will be the Formula 1 of autonomous electric car racing, then "Self Racing Cars" is the Sports Car Club of America (SCCA). At least, that's the plan for the new driverless car series holding its first "track days" this weekend.
3 comments on original post
 
I was there! Sadly there were no races, officially. Only test runs.

Daniel Estrada

Shared publicly  - 
 
 
Ray Kurzweil is building a chatbot for Google

It's based on a novel he wrote, and will be released later this year. He said that anyone will be able to create their own unique chatbot by feeding it a large sample of their writing, for example by letting it ingest their blog. This would allow the bot to adopt their "style, personality, and ideas."
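We don't know how Kurzweil's bot works internally; the simplest classical stand-in for "feed it your writing and it echoes your style" is a word-level Markov chain, sketched here with an invented sample text.

```python
import random
from collections import defaultdict

# A crude style mimic (surely far cruder than Kurzweil's system):
# learn which word follows which in a writing sample, then walk the
# chain to generate text built entirely from observed transitions.

def train(text):
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length, seed=None):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:           # dead end: no observed successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

sample = ("the robot reads the blog and the robot writes "
          "the blog in the same style as the author")
chain = train(sample)
```

Every word `generate` emits comes from the sample, so the output superficially "sounds like" its source — a toy version of adopting someone's style.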
1 comment on original post

Daniel Estrada

Shared publicly  - 
18 comments
 
+Daniel Estrada "I didn't start off being an ass."

And so another autobiography chapter title is born. 

Daniel Estrada

Shared publicly  - 
 
 
Robotic systems that adapt and learn, and robots with knives, what could possibly go wrong?

Imagine for a moment the following commands.
1) take this knife.
2) chop all these 5 cookies into little pieces.
3) complete this task in as short a period of time as you can.

Enter the lab assistant, who takes one of the cookies and eats it ... those commands and that action are a combo waiting to go seriously wrong.

To misquote someone else, "I am not afraid of smart AI, I am afraid of the really stupid ones".

Herb Mugface - YouTube Channel (Herb is the robot below)
https://www.youtube.com/channel/UCv0BqZMqV5xNa5JOkibxOpw
"We never taught it to do that," says one researcher.
7 comments on original post

Daniel Estrada

Shared publicly  - 
 
> The appeal of risk scores is obvious: The United States locks up far more people than any other country, a disproportionate number of them black. For more than two centuries, the key decisions in the legal process, from pretrial release to sentencing to parole, have been in the hands of human beings guided by their instincts and personal biases.

If computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be fairer and more selective about who is incarcerated and for how long. The trick, of course, is to make sure the computer gets it right. If it’s wrong in one direction, a dangerous criminal could go free. If it’s wrong in another direction, it could result in someone unfairly receiving a harsher sentence or waiting longer for parole than is appropriate.

The first time Paul Zilly heard of his score — and realized how much was riding on it — was during his sentencing hearing on Feb. 15, 2013, in court in Barron County, Wisconsin. Zilly had been convicted of stealing a push lawnmower and some tools. The prosecutor recommended a year in county jail and follow-up supervision that could help Zilly with “staying on the right path.” His lawyer agreed to a plea deal.

But Judge James Babler had seen Zilly’s scores. Northpointe’s software had rated Zilly as a high risk for future violent crime and a medium risk for general recidivism. “When I look at the risk assessment,” Babler said in court, “it is about as bad as it could be.”

Then Babler overturned the plea deal that had been agreed on by the prosecution and defense and imposed two years in state prison and three years of supervision.

https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
via Randall Villarreal
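The "wrong in one direction / wrong in another direction" framing in the excerpt is just the two off-diagonal cells of a confusion matrix. A toy example with made-up labels (1 = reoffended; predictions from a hypothetical risk score) makes the bookkeeping explicit:

```python
# Toy, invented data: actual outcomes vs a hypothetical score's predictions.
actual    = [1, 1, 0, 0, 0, 1, 0, 0]
predicted = [1, 0, 1, 0, 0, 1, 1, 0]

# A false negative frees someone likely to reoffend; a false positive
# lengthens a sentence unfairly. One "accuracy" number hides both.
false_negatives = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))
false_positives = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
```

ProPublica's analysis turns on exactly this asymmetry: which groups bear which kind of error.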



There’s software used across the country to predict future criminals. And it’s biased against blacks.
3 comments
 
Wow... two years in prison and three years of supervision for a lousy lawn mower and some tools. Seems a bit crazy to me. And somehow from the theft he gets a longer sentence based on a higher "chance" of violent crime, something he didn't even do... wow... Shades of Minority Report.
Daniel's Collections
People
In his circles
1,636 people
Have him in circles
30,624 people
Places
Map of the places this user has lived
Currently
Internet
Previously
Wildomar, CA - Riverside, CA - Urbana, IL - Normal, IL - New York, NY - Onjuku, Japan - Hong Kong, China - Black Rock City, NV - Santa Fe Springs, CA
Story
Tagline
Robot. Made of smaller robots.
Introduction
I've written under the handle Eripsa for over a decade on various blogs and forums. Today I do my blogging and research at Fractional Actors and on my G+ stream.

I'm interested in issues at the intersection of the mind and technology. I write and post on topics ranging from AI and robotics to the politics of digital culture.

Specific posting interests are described in more detail here and here.

_____

So I'm going to list a series of names, not just to cite their influence on my work, but really to triangulate on what the hell it is I think I'm doing. 

Turing, Quine, Norbert Wiener, Dan Dennett, Andy Clark, Bruce Sterling, Bruno Latour, Aaron Swartz, Clay Shirky, Jane McGonigal, John Baez, OWS, and Google. 

______


My avatar is the symbol for Digital Philosophy. You can think of it as a digital twist on Anarchism, but I prefer to think of it as the @ symbol all grown up. +Kyle Broom helped with the design. Go here for a free button with the symbol.

Work
Occupation
Internet
Basic Information
Gender
Male
Other names
eripsa
Daniel Estrada's +1's are the things they like, agree with, or want to recommend.
Santa Fe Institute
plus.google.com

Complexity research expanding the boundaries of science

Center Camp
plus.google.com


Augmata Hive
plus.google.com

experimenting with synthetic networks

Ars Technica
plus.google.com

Serving the technologist for over 1.3141592 x 10⁻¹ centuries

Burn, media, burn! Why we destroy comics, disco records, and TVs
feeds.arstechnica.com

Americans love their media, but they also love to bash it—and not just figuratively. Inside the modern history of disco demolition nights, c

American Museum of Natural History
plus.google.com

From dinosaurs to deep space: science news from the Museum

Using Smiles (and Frowns) to Teach Robots How to Behave - IEEE Spectrum
feedproxy.google.com

Japanese researchers are using a wireless headband that detects smiles and frowns to coach robots how to do tasks

Honeybees may have personality
feeds.arstechnica.com

Thrill-seeking isn't limited to humans, or even to vertebrates. Honeybees also show personality traits, with some loving adventure more than

DVICE: The Internet weighs as much as a largish strawberry
dvice.com

Dvice, Powered by Syfy. The Syfy Online Network. Top Stories • Nov 02 2011. Trending topics: cold fusion • halloween • microsoft. Japan want

DVICE: Depression leads to different web surfing
dvice.com

While a lot of folks try to self-diagnose using the Internet (Web MD comes to mind), it turns out that the simple way someone uses the Inter

Greatest Speeches of the 20th Century
market.android.com

Shop Google Play on the web. Purchase and enjoy instantly on your Android phone or tablet without the hassle of syncing.

The Most Realistic Robotic Ass Ever Made
gizmodo.com

In the never-ending quest to bridge the uncanny valley, Japanese scientists have turned to one area of research that has, so far, gone ignor

Rejecting the Skeptic Identity
insecular.com

Do you identify yourself as a skeptic? Sarah Moglia, event specialist for the SSA and blogger at RantaSarah Rex prefers to describe herself

philosophy bites: Adina Roskies on Neuroscience and Free Will
philosophybites.com

Recent research in neuroscience following on from the pioneering work of Benjamin Libet seems to point to the disconcerting conclusion that

Stanford Researchers Crack Captcha Code
feedproxy.google.com

A research team at Stanford University has introduced Decaptcha, a tool that decodes captchas.

Kickstarter Expects To Provide More Funding To The Arts Than NEA
idealab.talkingpointsmemo.com

NEW YORK — Kickstarter is having an amazing year, even by the standards of other white hot Web startup companies, and more is yet to come. O

How IBM's Deep Thunder delivers "hyper-local" forecasts 3-1/2 days out
arstechnica.com

IBM's "hyperlocal" weather forecasting system aims to give government agencies and companies an 84-hour view into the future o

NYT: Google to sell Android-based heads-up display glasses this year
www.engadget.com

It's not the first time that rumors have surfaced of Google working on some heads-up display glasses (9 to 5 Google first raised the

A Swarm of Nano Quadrotors
www.youtube.com

Experiments performed with a team of nano quadrotors at the GRASP Lab, University of Pennsylvania. Vehicles developed by KMel Robotics.