Learn Like a Human



By the age of five, children can understand spoken language, distinguish a cat from a dog, and play hide-and-seek. These are three things humans find easy but that computers and robots currently cannot do. Despite decades of research, we computer scientists have not figured out how to make computers perform even these basic tasks of perception and robotics.

The few successes we have had in building "smart" machines are notable as much for what they cannot do as for what they can. Computers can at long last play winning chess. But the program that can beat the world champion cannot talk about chess, let alone learn backgammon. Today's programs, at best, solve specific problems. Humans have broad and flexible capabilities; computers do not.

Maybe we have been going about it the wrong way. For 50 years, computer scientists have been trying to make computers intelligent while mostly ignoring the one thing that is intelligent: the human brain. Even so-called neural-network programming techniques are based on only a highly simplified model of how the brain operates.

In some ways, the task was framed wrongly from the beginning. In 1950, Alan Turing, the computing pioneer who broke codes for the United Kingdom during World War II, proposed to sidestep the problem of defining artificial intelligence by replacing it with a challenge that has since become known as the Turing test. In brief, it asks whether a computer, hidden out of sight, can conduct a conversation in a way that is indistinguishable from a human.

So far, the answer is no. Worse, Turing's behavioral framing of the problem has steered researchers away from the most promising avenue of research: the human brain. It is clear that the brain must work very differently from a digital computer. So, to build intelligent machines, why not first understand how the brain works, and then ask how we can replicate it?

My colleagues and I have been pursuing this approach for many years. We have focused on the brain's neocortex, and we have made significant progress in understanding how it works. We call our theory hierarchical temporal memory, or HTM, for reasons I will explain shortly. We have created a software platform that allows anyone to build HTMs for experimentation and deployment. You don't program an HTM as you would a computer; you configure it with software tools and then train it by exposing it to sensory data. HTMs thus learn in much the same way that children do. HTM is a rich theoretical framework that cannot be fully described in a short article like this, so I will give only a high-level overview of the theory and the technology. For more information, see http://www.numenta.com.

Goldenrod: The neocortex, the center of human reasoning, is an amazing organ. Its roughly 30 billion neurons are organized into six layers, each about the thickness of a playing card. Pictured here is an image from the Blue Brain Project showing neurons from the fifth layer. The Blue Brain Project, a collaboration between IBM and the École Polytechnique Fédérale de Lausanne, uses IBM's Blue Gene supercomputing system to study the brain.

First, I will describe the basics of HTM theory; then I will introduce the tools for building products based on it. I hope to entice some readers to learn more and to join our effort.

We focused our research on the neocortex because it is responsible for almost all high-level thought and perception. That role explains its unusually large size in humans, about 60 percent of the brain's volume [see illustration, "Goldenrod"]. The neocortex is a thin sheet of cells, folded to form the convolutions that have become a visual synonym for the brain itself. Although different parts of the sheet handle different problems, such as vision, hearing, language, music, and motor control, the neocortical sheet itself is remarkably uniform. Most parts look nearly identical at both the macroscopic and microscopic levels.

Because of this uniform structure, neuroscientists have long suspected that all parts of the neocortex work on a common algorithm, that is, that the brain uses a single, flexible tool to hear, to see, to understand language, and even to play chess. Much experimental evidence supports the view that the neocortex is such a general-purpose learning machine. What it learns and what it can do are determined by the size of the neocortical sheet, which senses the sheet is connected to, and the training it receives. HTM is a theory of the neocortical algorithm. If we are right, it represents a new way of solving computational problems that have so far eluded us.

Although the neocortex is fairly uniform, it is divided into dozens of areas that do different things. Some areas, for instance, are responsible for language, others for music, and others for vision. They are connected by bundles of nerve fibers. If you make a diagram of the connections, you find that they trace a hierarchical design. The senses feed input directly to certain areas, which feed information up to other areas, which in turn send information to still other areas. Information also flows down the hierarchy, but because the up and down pathways are distinct, the hierarchical arrangement remains clear and well documented.

As a general rule, neurons in lower areas of the hierarchy represent simple structure in the input, and neurons in higher areas represent more complex structure. For example, input from the ear traverses a series of areas, each representing increasingly complex aspects of sound. By the time the information reaches a language center, we find cells that respond to words and phrases independent of speaker or pitch.

Because the cortical areas closest to the sensory input are relatively large, you can picture the hierarchy as the root system of a tree, in which sensory input enters at the broad bottom and high-level thoughts occur at the trunk. There are many details I am omitting; what matters is that hierarchy is a fundamental element of how the neocortex builds and stores information.

HTM is similarly built around a hierarchy of nodes. The hierarchy, and how it works, is the most important feature of HTM theory. In an HTM, knowledge is distributed across many nodes up and down the hierarchy. The memory of what a dog looks like is not stored in one location. Low-level visual details such as fur, ears, and eyes are stored in low-level nodes, and high-level structures, such as the head or torso, are stored in high-level nodes. [See the illustrations, "Everyone knows you are a dog" and "Higher and Higher."] In an HTM you cannot always pinpoint such knowledge precisely, but the overall idea is correct.

Hierarchical representation solves many problems that have plagued AI and neural networks. Such systems usually fail because they cannot handle large, complex problems: either training takes too long or it requires too much memory. A hierarchy, by contrast, lets us "reuse" knowledge and thus reduce training. When an HTM is trained, the low-level nodes learn first. Representations in higher-level nodes then share what was already learned in the lower-level nodes.

For example, a system may require a lot of time and memory to learn what dogs look like, but having done so it can learn what cats look like in less time, with less memory. The reason is that cats and dogs share many low-level features, such as fur, paws, and tails, which do not have to be relearned for each new animal.
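Here is a minimal sketch in Python of that dog/cat reuse. All the names (Node, learn, observe) are my own illustration of the principle, not Numenta's actual software: low-level nodes memorize simple patterns once, and a higher-level node represents objects as combinations of the stable names the lower nodes emit.

```python
# Minimal sketch of hierarchical knowledge reuse (illustrative only;
# not Numenta's API). Low-level nodes memorize simple patterns; a
# high-level node represents objects as combinations of the names
# the low-level nodes output, so shared features are learned once.

class Node:
    def __init__(self):
        self.patterns = {}          # pattern -> stable name

    def learn(self, pattern):
        if pattern not in self.patterns:
            self.patterns[pattern] = f"p{len(self.patterns)}"
        return self.patterns[pattern]

# Level 1: one node per kind of feature, learning local details.
fur_node, face_node = Node(), Node()

# Level 2: combines the stable names emitted by level 1.
top_node = Node()

def observe(animal_features):
    low_level = (fur_node.learn(animal_features["texture"]),
                 face_node.learn(animal_features["face"]))
    return top_node.learn(low_level)

dog = observe({"texture": "fur", "face": "snout"})
cat = observe({"texture": "fur", "face": "whiskers"})
# "fur" was learned once at the low level and reused: the cat needed
# only one new low-level pattern, plus a new combination at the top.
print(dog, cat, fur_node.patterns)
```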

The second fundamental similarity between HTM and the neocortex is the way they use time to make sense of the fast-flowing river of data arriving from the outside world. At the most basic level, each node in the hierarchy learns common sequences of patterns, which is analogous to learning a melody. When a new sequence arrives, the node matches the input against previously learned patterns, analogous to recognizing a melody. The node then outputs a constant pattern representing the best-matching sequence, analogous to the name of a melody. Because the outputs of one level of nodes become the inputs of the next level, the hierarchy learns sequences of sequences.

This is how an HTM transforms the rapidly changing sensory patterns at the bottom of the hierarchy into relatively stable thoughts and concepts at the top. Information can also flow down the hierarchy, unfolding sequences of sequences. For example, when you give a speech, you start with a sequence of high-level concepts; each concept unfolds into a sequence of sentences, each sentence into a sequence of words, and each word into a sequence of phonemes.
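Returning to the melody analogy of the preceding paragraphs, here is a toy sketch, again with invented names rather than the real HTM algorithms: the node memorizes short sequences from a stream and, given a noisy window of new input, outputs a constant name for the best-matching stored sequence.

```python
# Illustrative sketch of a sequence-memorizing node (not the actual
# HTM learning algorithm). The node stores sequences it has seen
# and, given a recent window of input, outputs a constant name for
# the stored sequence that matches best -- like naming a melody.

class SequenceNode:
    def __init__(self, length=3):
        self.length = length
        self.sequences = {}                 # tuple of inputs -> name

    def learn(self, stream):
        for i in range(len(stream) - self.length + 1):
            seq = tuple(stream[i:i + self.length])
            self.sequences.setdefault(seq, f"seq{len(self.sequences)}")

    def recognize(self, window):
        # Score stored sequences by element-wise overlap with the window.
        def overlap(seq):
            return sum(a == b for a, b in zip(seq, window))
        best = max(self.sequences, key=overlap)
        return self.sequences[best]          # constant output name

node = SequenceNode()
node.learn(["C", "D", "E", "C", "D", "E", "G", "A", "B"])
print(node.recognize(["C", "D", "F"]))       # best match despite noise
```

Because the constant name a node outputs becomes the input of the node above it, stacking such nodes is what lets the hierarchy learn sequences of sequences.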

Another, subtler way HTM uses time is in deciding what to learn. Every part of an HTM learns on its own; no programmer or anyone else tells the nodes what to do. It is tempting to try to play that coordinating role by deciding in advance what each node should do, for example, "Node A will learn to recognize eyes and ears, and node B will learn noses and fur." But this approach does not work. As nodes learn, they change their outputs, which changes the inputs of other nodes. Because the memory in an HTM is dynamic, it is impossible to decide in advance what any given node should learn.

So how does a node know what to learn? This is where time plays a key role, and it is one of the unique aspects of HTM. Patterns that occur close together in time usually have a common cause. For example, when we hear a sequence of notes over and over, we learn to recognize them as a single thing, a melody. We do the same with visual and tactile patterns. Watching a dog move in front of us, for instance, teaches us that a left-facing dog is the same thing as a right-facing dog, even though the actual information on the retina differs from moment to moment. HTM nodes learn in the same way; in effect, they use time as a teacher. Indeed, the only way to train an HTM is with input that changes over time. Figuring out how to do this is one of the most challenging parts of HTM theory and practice.

Since HTMs, like humans, can recognize spatial patterns such as static pictures, you might think that time is unnecessary. It is not. Strange as it may seem, we cannot learn to recognize pictures without first training on moving images. You can see why in your own behavior. When you are confronted with a new and confusing object, you pick it up and move it about in front of your eyes. You look at it from different directions, from top and bottom. As the object moves and the pattern on your retina changes, your brain assumes the unknown object is not changing. Nodes in an HTM pool different input patterns together on the assumption that two patterns recurring very close together in time probably share a common cause. Time is the teacher.
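Here is a minimal sketch of this "time as the teacher" idea, under the simplifying assumption that adjacency in time is the only cue: patterns that occur back to back are pooled into one group standing in for one cause. The real HTM pooling algorithms are far more sophisticated.

```python
# Sketch of "time as the teacher" (illustrative, not the real HTM
# pooling algorithm): input patterns that occur adjacently in time
# are merged into one group, on the assumption that they share a
# common cause -- e.g. left-facing and right-facing views of a dog.

def temporal_pool(stream):
    index = {}                       # pattern -> set of pooled patterns
    prev = None
    for pattern in stream:
        if pattern is None:          # a pause in the input: no adjacency
            prev = None
            continue
        if pattern not in index:
            index[pattern] = {pattern}
        if prev is not None and index[prev] is not index[pattern]:
            # Adjacent in time: merge the two groups into one cause.
            merged = index[prev] | index[pattern]
            for p in merged:
                index[p] = merged
        prev = pattern
    # Deduplicate the group sets by identity before returning them.
    return list({id(g): g for g in index.values()}.values())

# The two dog views appear back to back, so they pool into one cause;
# the cat views, seen after a pause, form their own separate cause.
print(temporal_pool(["dog_left", "dog_right", "dog_left", None,
                     "cat_front", "cat_side"]))
```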

The last word in hierarchical temporal memory is "memory." This attribute distinguishes HTMs from systems that are programmed. Most of the effort in building an HTM-based system goes into training it by exposing it to sensory data, not into writing code or configuring the network. People sometimes think of memory as a single remembered instance, as in "what did I eat for lunch"; others associate it with computer memory. HTM memory is neither: an HTM is a hierarchical, dynamic memory system.

What makes HTM different from other approaches to machine learning? HTM is unique not because we have discovered some new and magical concept. Rather, HTM combines the best attributes of several existing techniques and adds some new twists. For example, hierarchical representations exist in a technique called hierarchical hidden Markov models, or HHMMs. However, the hierarchies used in HHMMs are simpler than those in HTMs. Although HHMMs can learn complex temporal patterns, they do not handle spatial variation well. It is as if you could learn melodies but could not recognize them when they are played in a different key. Still, the similarity of HTM to other approaches is a good sign: it means other people have reached similar conclusions. Numenta's Web site provides detailed comparisons with other techniques.

Another unique aspect of HTM is that it is a biological model as well as a mathematical one. The mapping between HTM and the detailed anatomy of the neocortex runs deep. As far as we know, no other model approaches HTM's level of biological accuracy. The mapping is so good that whenever we run into a theoretical or technical problem, we turn to neuroanatomy and physiology for guidance.

Finally, HTMs work. "If we really understand a system, we will be able to build it," says Carver Mead, the noted California Institute of Technology electrical engineer. "Conversely, we can be sure that we do not fully understand the system until we have synthesized and demonstrated a working model." We have built and tested HTMs of sufficient complexity to see that they work: they solve at least some difficult and useful problems, such as handling distortion and variation in visual images. Our networks can recognize dogs in simple images whether the dogs face right or left, are large or small, are seen from the front or the back, and even in grainy or partially occluded images.

So far, we are pleased with the results and believe HTM can be applied to many other problems we have not yet tried. We also know there are problems our current HTM implementations cannot solve today, and we have not yet built very large networks. Time will tell what obstacles lie ahead, but for now we are encouraged by how well the technology works.

I first had the idea of combining computer science with brain research in 1979, shortly after graduating from Cornell University with a degree in electrical engineering. For many years I pursued it part-time, while working as an engineer at several companies, including two I cofounded, Palm Computing and Handspring, and full-time for a year as a graduate student in biophysics at the University of California, Berkeley.

In 2002, with the encouragement of several neuroscientists, I founded the Redwood Neuroscience Institute (RNI). For three years I collaborated with about ten other scientists on the anatomy, physiology, and theory of the neocortex. In those years more than a hundred other scientists visited RNI to discuss and debate, and I made progress on HTM theory.

By 2004 I had worked out the essence of HTM, but the theory was still couched in biological terms (I published part of it in my 2004 book, On Intelligence). I did not know how to turn the biological theory into a practical technology. A colleague of mine, Dileep George, knew my work and supplied the missing link: he showed that an HTM could be modeled as a Bayesian network, a well-known technique for resolving ambiguity by assigning relative probabilities in problems with many conflicting variables. George also showed that we could build HTM-based machines.
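To illustrate the Bayesian-network view with a toy example (the numbers and names here are invented, not George's actual formulation), each child node can send its parent a likelihood over the parent's candidate causes; the parent multiplies these bottom-up messages with its prior and normalizes, so ambiguous evidence from one child is resolved by another.

```python
# Toy sketch of the Bayesian-network view of an HTM node (invented
# numbers and names; not Dileep George's actual formulation). Each
# child sends up a likelihood over the parent's causes; the parent
# multiplies them with its prior and normalizes the result.

def combine_beliefs(prior, messages):
    belief = {}
    for cause, p in prior.items():
        for msg in messages:        # bottom-up messages from children
            p *= msg[cause]
        belief[cause] = p
    total = sum(belief.values())
    return {c: p / total for c, p in belief.items()}

prior = {"dog": 0.5, "cat": 0.5}
texture_msg = {"dog": 0.6, "cat": 0.6}   # "fur": ambiguous on its own
face_msg = {"dog": 0.9, "cat": 0.2}      # "snout-like": favors dog

print(combine_beliefs(prior, [texture_msg, face_msg]))
# -> a belief strongly favoring "dog" despite the ambiguous texture
```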

His prototype application was a vision system that recognized line drawings of 50 different objects, unaffected by size, position, distortion, and noise. Although it was not designed to solve a practical problem, it was impressive because it did things that no other vision system we know of can do.

In 2005, armed with a theory of the neocortex, a mathematical formulation of it, and a working prototype, George and I decided to found Numenta, in Menlo Park, Calif. Our experience in industry and academia told us that the fastest progress would be made in industry, especially given the opportunity to build exciting products and new businesses. Today RNI continues as the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley. George and 15 other employees work at Numenta, and I divide my time between Numenta and Palm.

To help Numenta jump-start an industry based on the HTM concept, we set out to create a set of tools with which anyone can experiment with HTMs and use them to solve real-world problems. By making the tools widely available, providing source code for many parts of them, and encouraging others to extend and commercialize their applications and enhancements, we hope to draw engineers, scientists, and entrepreneurs into working with HTM. Together the tools constitute an experimentation platform for HTM, designed to attract many developers by giving them the opportunity to create successful businesses.

We realize that learning a new way of computing takes a lot of time and energy. One thing I have learned is never to stand in the way of developers willing to put hard work into a new platform. To that end, we have created extensive documentation and a number of examples, and we provide complete source code for two of the three components of the platform.

The first component is the runtime engine, a set of C routines that manage the creation, training, and operation of HTM networks on standard computer hardware. It is highly scalable: you can run the platform on anything from a laptop with a single CPU to a PC with multiple cores to a large cluster of computers. Major development can be done on a single CPU, but as an application grows in scale, multiple CPUs may be needed for performance.

Today the runtime engine runs on Linux; our employees and customers use both PCs and Macs. In designing an HTM-based system, developers need to try different configurations, and running tests concurrently can save a great deal of time. Our tools make such parallel testing easier.

The runtime engine ensures that developers need not worry about how messages pass between nodes or how data is shared between machines. When the engine runs an HTM network on a cluster of servers, it automatically handles the back-and-forth messaging efficiently. By taking care of this plumbing, the engine lets developers focus on HTM design, training, and results.

The second component of the platform is a set of tools that HTM developers use to create, train, and test networks. These tools are written in the Python scripting language; their source code is available for inspection and modification, as is documentation of the runtime engine's application programming interface, or API.
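To give a flavor of the create/train/test workflow those tools support, here is a self-contained Python sketch. The Network and LearningNode classes are my illustration of the ideas described above, not Numenta's actual API.

```python
# Self-contained sketch of the create/train/test workflow described
# above (all class and method names are my own illustration, not
# Numenta's actual API).

class Network:
    def __init__(self):
        self.nodes = []
        self.links = []                  # (source, destination) pairs

    def add_node(self, node):
        self.nodes.append(node)
        return node

    def connect(self, src, dst):
        self.links.append((src, dst))

class LearningNode:
    def __init__(self, name):
        self.name = name

# Build a tiny two-level hierarchy over four image patches.
net = Network()
level1 = [net.add_node(LearningNode(f"patch{i}")) for i in range(4)]
top = net.add_node(LearningNode("top"))
for node in level1:
    net.connect(node, top)

print(len(net.nodes), "nodes,", len(net.links), "links")
# A real session would continue with something like
# net.train(training_movies) followed by net.infer(test_image).
```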

Although this first set of tools will be sufficient for many, some people will want to modify and enhance them. For example, a tool for visualizing the data structures in the nodes of a vision-oriented HTM would differ somewhat from one for an HTM that analyzes business data. We can also imagine value-added resellers packaging tool kits and pretrained HTMs for particular applications or particular sensor systems, such as radar or infrared cameras.

Developers can package and sell tools derived from Numenta source code without paying us fees. Our one requirement is that such tools work only with Numenta's runtime engine.

The third component of the platform is a plug-in API and related source code, which together let developers create new kinds of nodes and plug them into a network. There are two classes of nodes: basic learning nodes, which can appear anywhere in a network, and interface nodes, which connect the network to sensors that provide input or to effectors that accept output. Numenta provides a basic learning node and several sensor nodes. When creating a network, you choose a type for each node in it, in some cases modifying them, say, to connect to a new kind of sensor.
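As an illustration of what an interface node might look like, here is a hypothetical sensor-node sketch in Python; the base class and method names are invented for this example, and the real plug-in API is documented on Numenta's site.

```python
# Hypothetical sketch of a plug-in sensor node (base class and hook
# names invented for illustration; see Numenta's documentation for
# the real plug-in API). An interface node converts raw readings
# from a new kind of sensor into patterns the learning nodes expect.

class NodeBase:                      # stand-in for the platform's base class
    def compute(self, inputs):
        raise NotImplementedError

class RadarSensorNode(NodeBase):
    """Interface node feeding radar range readings into the network."""

    def __init__(self, num_bins=8, max_range=100.0):
        self.num_bins = num_bins
        self.max_range = max_range

    def compute(self, inputs):
        # Quantize each range reading into a discrete bin so that
        # downstream learning nodes see a small, stable alphabet.
        width = self.max_range / self.num_bins
        return tuple(min(int(r / width), self.num_bins - 1)
                     for r in inputs)

node = RadarSensorNode()
print(node.compute([3.2, 47.5, 99.9]))   # -> (0, 3, 7)
```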

People with the requisite mathematical skills may want to improve the learning algorithms themselves; we expect such enhancements to continue for years. Numenta provides the source code of its existing nodes so developers can see how they work and improve on them as needed. Developers may sell nodes derived from Numenta source code without paying Numenta any fees; again, our only requirement is that such nodes work only with Numenta's runtime engine.

Finally, we created a set of sample HTM networks, as well as documentation and training materials to help developers get started.

What might you do with Numenta's platform? In many discussions with developers, we have identified a broad range of applications for HTM. For example, one class of problems resembles identifying the contents of a picture; we call this spatial reasoning. Given a new image or pattern, the HTM tells you what it is. The data might come from cameras or from other kinds of sensors, such as lidar. Automobile manufacturers have several such applications.

We have also seen many applications that model networks, including computer networks, power networks, machine networks, and even social networks. In such applications, HTM learns to identify or predict undesirable future conditions in a given network.

HTM may also help anyone trying to make sense of a large collection of data. Petroleum exploration is one example, because vast amounts of data must be gathered and analyzed to determine the best places to drill. Pharmaceutical companies are interested in whether HTM can help them discover new drugs by teasing out clues from the data they have accumulated in past drug trials.

HTM works best when the data has a hierarchical structure. Businesses have such structure: managers, supervisors, and workers are organized hierarchically, as are business functions such as finance, sales, and accounts receivable. A large business-consulting company has approached us about using our tools to study company behavior. Could HTM be used to model business functions? We don't know yet, but I suspect so, even though this is not one of the classic AI problems.

For many HTM applications, the trickiest part may be deciding what the "sensory" training data should be and how to present it in a form that changes over time.
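One plausible trick, in the spirit of the "time as teacher" discussion above, is to turn static data into a time-varying stream by sweeping a small window across it. The sketch below is illustrative, not Numenta's actual tooling.

```python
# Sketch of turning static images into time-varying training input
# (illustrative; not Numenta's actual tooling). A small window
# sweeps across each row of the image, producing the kind of
# smoothly changing pattern stream an HTM needs for training.

def sweep(image, window=3):
    """Yield successive horizontal slices of each row of an image."""
    for row in image:
        for start in range(len(row) - window + 1):
            yield row[start:start + window]

image = ["00111100",
         "01100110"]
for frame in sweep(image):
    print(frame)   # consecutive frames overlap, so they share a cause
```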

We also see interest in modeling manufacturing processes. Many industries generate a great deal of data but have no easy way to discover the patterns in it or to make predictions based on past experience.

These and other potential applications of HTM are described in the documentation on the Numenta website and in my book.

Given the state of the platform, we know there are applications we cannot take on today. Our current node algorithms cannot support anything that requires memory of long sequences or specific timing. Understanding spoken language and music, for example, as well as robotics, requires precise timing. This is not a fundamental limitation of the Numenta platform, only of the tools and algorithms we have today; we plan to enhance the algorithms to add these capabilities.

We call this version of the Numenta platform a research release, as opposed to the usual alpha or beta release, although its quality and documentation are up to the standards of a solid beta. We chose the term "research release" because we find it takes most people months to grasp the concepts behind HTM well enough to get the most out of the platform.

By way of analogy, suppose you were asked to write a complex software program, such as a spreadsheet application, but had never been exposed to computers, computer concepts, compilers, or debuggers. You would need months of experimenting and learning before you could make real progress on the spreadsheet. The concepts behind HTM, and the tools we have created, are similarly unfamiliar at first.

In addition, our first learning algorithms are not as easy to use as we would like. They have a number of parameters whose proper values must be determined by experiment and analysis. This is not terribly difficult, but neither is it trivial. We believe we can eliminate this step in future releases, but most applications require it now. So although the platform is simple to install, easy to start with, and often fun, users of this first release should expect to spend from months to a year experimenting before they obtain commercially useful results.
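That experimentation lends itself to automation. Below is a minimal sketch of a parallel parameter sweep; train_and_score is a hypothetical stand-in for building, training, and scoring one HTM configuration, and the parameters and scoring formula are invented for illustration.

```python
# Minimal sketch of a parallel parameter sweep (train_and_score is a
# hypothetical stand-in for building, training, and evaluating one
# HTM configuration; replace its body with real platform calls).

from itertools import product
from multiprocessing import Pool

def train_and_score(params):
    levels, nodes_per_level = params
    # ... build, train, and test an HTM with these parameters ...
    score = 1.0 / (1 + abs(levels - 3) + abs(nodes_per_level - 16))
    return params, score             # placeholder score for the sketch

if __name__ == "__main__":
    grid = list(product([2, 3, 4], [8, 16, 32]))
    with Pool() as pool:             # run configurations concurrently
        results = pool.map(train_and_score, grid)
    best = max(results, key=lambda r: r[1])
    print("best parameters:", best[0])
```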

Although we may charge for our tools in the future, the research version is available for free.

HTM is not a model of a full brain or even of the entire neocortex. Our systems have no desires, motives, or intentions of any kind. Indeed, we are not trying to make machines that are humanlike. Rather, we want to exploit a mechanism that we believe underlies human thought and perception. This principle of operation can be applied to many problems of pattern recognition, pattern discovery, prediction, and, ultimately, robotics. But striving to build machines that pass the Turing test is not our mission.

The best analogy I can offer goes back to the start of the computing age. The first digital computers worked on the same principles as today's machines. But 60 years ago we were only beginning to understand how to use computers, which applications suited them best, and what engineering problems had to be solved to make them easier to use, more capable, and faster. Integrated circuits, operating systems, computer languages, and disk drives had yet to be invented. Our understanding of HTM today is at a similar stage of development.

We have figured out the basic principles of how the neocortex uses hierarchy and time to create a model of the world and to perceive new patterns as part of that model. If we are right, the real era of intelligent machines may be just beginning.

This article was edited on March 28, 2007.

Jeff Hawkins, the inventor of the PalmPilot, is a founder of Palm Computing, Handspring, and the Redwood Neuroscience Institute. In 2003 he was elected a member of the National Academy of Engineering.

Hawkins and Sandra Blakeslee's book On Intelligence was published in 2004 by Times Books. The supporting website for the book is located at http://www.onintelligence.org.

Learn more about Numenta at http://www.numenta.com. The Redwood Theoretical Neuroscience Center is located at http://redwood.berkeley.edu.

Psyonic's prosthesis vibrates to simulate touch

Joanna Goodrich is an assistant editor of The Institute, covering the work and achievements of IEEE members as well as IEEE and technology-related events. She holds a master's degree in health communication from Rutgers University in New Brunswick, New Jersey.

Psyonic's Ability Hand uses vibrations to alert the user when he touches an object, as well as to indicate how firm his grasp is and when to let go.

While visiting Pakistan with his parents, 7-year-old Aadeel Akhtar met a girl his own age who had lost her right leg. It was the first time he had met someone with a limb difference. The girl's family could not afford a prosthetic leg for her, so she used a tree branch as a cane to help her walk. From that encounter, Akhtar resolved that one day he would develop a reasonably priced prosthesis.

Twenty-one years later, in 2015, the IEEE member founded Psyonic, a company that designs and manufactures advanced, affordable prostheses. Akhtar is its CEO. The Champaign, Illinois-based startup released its first product, the Ability Hand, in September. It is the fastest bionic hand on the market and the only one with touch feedback.

The prosthesis uses pressure sensors to simulate touch through vibration, and it functions much like a natural hand. All five fingers of the lightweight prosthesis can bend and extend, and it offers 32 different grips.

"The most important thing for us is to provide people with a powerful prosthesis that allows them to do things they never thought they could do again," Akhtar said.

In the United States, the Ability Hand is available to patients 13 years of age or older.

Akhtar originally wanted to work with amputees as a physician. He earned a bachelor's degree in biology from Loyola University Chicago in 2007. But during his studies, he took a computer science course and fell in love with the subject.

"I like everything about engineering, programming and building things," he said. "I want to find a way to combine my interest in engineering and medicine."

He went on to earn a master's degree in computer science in 2008, also from Loyola. Two years later, he was accepted into the Medical Scholars Program at the University of Illinois at Urbana-Champaign, which lets students pursue a medical degree and a doctorate simultaneously. Akhtar received a master's degree in electrical and computer engineering and a doctorate in neuroscience in 2016, but he has not yet completed his medical degree.

His Ph.D. research focused on developing what would eventually become the Ability Hand.

In 2014, he and another graduate student, Mary Nguyen, collaborated with the Range of Motion Project, a nonprofit organization that provides prosthetic devices to people around the world who cannot afford them. Akhtar and Nguyen flew to Quito, Ecuador, to test their prototype on Juan Suquillo, who lost his left hand in the 1979 border war between Ecuador and Peru.

"Everything we do is patient-centered."

Using the prototype, Suquillo was able to pinch his thumb and index finger together for the first time in 35 years. He said he felt that part of himself had been restored by the prosthesis. After receiving that feedback, Akhtar said, he hoped that "everyone who uses our prosthetic hand will feel the same way Juan did."

Soon after returning from that trip, Akhtar founded Psyonic. To get advice on how to run the company, and possibly win some money, he entered the bionic hand in the Cozad New Venture Challenge at the University of Illinois. The competition provides teams with coaching, as well as seminars on topics such as sales techniques and customer development. Psyonic won first place and a $10,000 prize. The startup also won a $15,000 Samsung research innovation award in 2015. Since then, Psyonic has received funding from the University of Illinois Technology Entrepreneur Center, the iVenture Accelerator, and the National Science Foundation.

The start-up company currently has 23 employees, including engineers, public health experts, social workers and doctors.

Psyonic's artificial hand weighs 500 grams, about the weight of an average adult hand; most prosthetic hands, Akhtar says, weigh about 20 percent more. The Ability Hand contains six motors mounted in a carbon-fiber housing. It has silicone fingers, a battery pack, and muscle sensors that are placed on the patient's residual limb. If a patient's arm is amputated below the elbow, for example, two muscle sensors are placed on her remaining forearm muscles, and she uses those sensors to control the hand's movement and grasp.

The Ability Hand connects via Bluetooth to a smartphone app, which gives users another way to configure and control hand movements. The hand's software updates automatically through the app, and the company says the battery fully charges within an hour.

Akhtar working on the prosthetic hand. Photo: Psyonic

Akhtar said that when he talked to patients who used prosthetic hands, they mentioned problems such as lack of sensation and frequent breakage.

To give patients a sense of touch, the Ability Hand has pressure sensors on the index finger, little finger, and thumb. When the patient touches an object, he feels vibrations on his skin that imitate the sense of touch. The prosthesis uses these vibrations to alert the user that he has touched the object, to indicate how firmly he is grasping it, and to signal when to let go.
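As a rough illustration of how such feedback might work (purely hypothetical; Psyonic has not published its algorithm, and the threshold and values here are invented), firmware could map each fingertip's pressure reading to a vibration intensity and flag touch and release events:

```python
# Purely hypothetical sketch of pressure-to-vibration feedback
# (Psyonic has not published its algorithm; threshold and values
# are invented). Each fingertip pressure reading maps to a
# vibration level, and threshold crossings signal touch/release.

TOUCH_THRESHOLD = 0.05    # normalized pressure that counts as contact

def feedback(pressures, prev_pressures):
    events = []
    for finger, (p, prev) in enumerate(zip(pressures, prev_pressures)):
        vibration = min(1.0, p)          # firmer grip -> stronger buzz
        if prev < TOUCH_THRESHOLD <= p:
            events.append((finger, "touch", vibration))
        elif p < TOUCH_THRESHOLD <= prev:
            events.append((finger, "release", 0.0))
    return events

print(feedback([0.3, 0.0, 0.6], [0.0, 0.0, 0.02]))
# -> [(0, 'touch', 0.3), (2, 'touch', 0.6)]
```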

Akhtar said most prostheses break because they are made of rigid materials such as plastic, wood, or metal, which do not flex when they strike a hard surface. Psyonic makes its fingers from rubber and silicone, he said, so they are flexible and can withstand a great deal of force.

Video: "Wrestle with a bionic hand!?" (www.youtube.com)

To test the hand's durability, Akhtar arm-wrestled Dan St. Pierre, the 2018-2019 U.S. paratriathlon national champion.

Akhtar said that the Ability Hand is also waterproof.

"Everything we do is patient-centric," Akhtar said. "We hope to improve the quality of life for patients with limb differences as much as possible. Seeing the impact that the hand of supernatural power has had on people in such a short period of time motivates us to move on."

Psyonic and its partners are researching ways to improve the prosthetic hand. Some partners, including the Shirley Ryan AbilityLab in Chicago and the University of Pittsburgh, are developing brain and spinal-cord implants that could help patients control the prosthesis. The implants can stimulate the areas of the brain that process sensory input: when the patient touches one of the prosthetic fingers, the implant sends a signal to the brain so the patient feels pressure.

Akhtar joined IEEE in 2010 when he was a doctoral student.

He has published papers on Psyonic's work at the IEEE/RSJ International Conference on Intelligent Robots and Systems and the IEEE International Conference on Robotics and Automation.

He said IEEE provides a great "ecosystem" around prosthetics and robotics: "It's great to be a member of this community." Access to IEEE's academic and professional community, some of whom are pioneers in the field, he added, helps the company get important feedback on how to improve the hand and will aid its development of prosthetic legs in the future.

Your weekly selection of wonderful robot videos

Video Friday is your weekly selection of awesome robot videos, collected by your friends at IEEE Spectrum Robotics. We also post a weekly calendar of upcoming robotics events for the next few months; here is what we have so far (send us your events!):

If you have any suggestions for next week, please let us know, and enjoy today's videos.

We first saw Cleo Robotics at CES 2017, where the company was showing a consumer prototype of its unique ducted-fan drone. It has just announced a new surveillance-focused version, which is actually called the Dronut.

For such a small thing, a 12-minute flight time is not the worst. I hope it finds a unique niche and helps Cleo get back into the consumer market, because I want one.

This is some very, very impressive robust behavior on ANYmal, which is part of Joonho Lee's master's thesis at ETH Zurich.

The title of this DeepRobotics video is "The End is Coming." It's better not to think about it, maybe.

At Ben-Gurion University of the Negev, they are trying to figure out how to make a COVID-19 officer robot authoritative enough that people will actually pay attention to it and do what it says.

You would think high-voltage wires would be the last place you'd want drones flying, but here we are.

This is probably the highest speed multiplier I have ever seen in a robot video.

This is an interesting manipulator design from Yale's GRAB Lab that makes in-hand manipulation easy.

The ugo robot that is just a ball with eyes on a stick is one of my favorite robots because it is unapologetically just a ball on a stick.

Robot, make me a sandwich. Then make me a bunch of sandwiches.

Refilling water bottles is not a very complicated task, but letting robots do it means that humans don't have to do it.

Failure modes, effects, and diagnostic analysis (FMEDA) sets the standard for the safety and reliability calculations of automated protection systems under IEC 61508. However, FMEDA results are only as good as the failure-rate data used to create them. A new component reliability database (CRD) overcomes past limitations and improves accuracy.