
This is the Transistor of Robotics

electrohydraulic musculoskeletal robotic leg

Researchers at the Max Planck Institute for Intelligent Systems (MPI-IS) and ETH Zurich, in a research partnership called the Max Planck ETH Center for Learning Systems (CLS), have developed an “electrohydraulic musculoskeletal robotic leg.” Read the paper in Nature Communications (full citation at the end of this article): https://rdcu.be/dWBvY


The first transistor
The first transistor, assembled by Walter Brattain and successfully tested for the first time on December 16, 1947 (Computer History Museum)

It is my opinion that we are looking at a “first transistor” moment in robotics… well, maybe not THIS exact artificial muscle, but one very much like it. It may not look like much, but first prototypes of new technology rarely do. It is only a wisp of a hint of what it can become. Take a look at the picture above of the very first transistor; we can now fit 10 BILLION of them on a chip that fits in the palm of a child's hand. So, imagine that flimsy little plastic accordion one day turning into a powerful biceps on a robot arm that can perform Olympic gymnastics or collect garbage from the curb.


Robotics is a fascinating blend of humankind's two greatest creations—the industrial age of automating physical labor and the information age, culminating with AI, which automates intellectual labor. Robotics is their love child, and powerful it will be. If you thought the internet strongly impacted the world, wait until robotics hits its tipping point. And with the advent of artificial muscles, it is about to. Mark my words.


The last time I made a bold public prediction was in 2007 in a speech I gave at the Robo Development Conference and Expo. You can still watch it on YouTube. (It is split into ten videos, because way back then you could only post videos up to 10 minutes! I’m so old.)


Here is a link to a playlist of all ten:

Matt Trossen Speech

I spoke about how robotics was in dire need of a software solution that facilitated modularity and the sharing of advancements. Then Willow Garage came along, stole my idea, and made ROS! Since Willow Garage is long gone, no one is around to say that I am making stuff up. (I am totally making stuff up.)

Since then, robotics has accelerated exponentially, thanks to the open source community sharing advancements built on ROS and to hardware becoming far more plug and play. Modularity and standards are the natural evolution of every technology adoption lifecycle. From graphics cards working with thousands of motherboards, to tires fitting countless rims on cars, to the plumbing connections in your bathroom, open standards are necessary for a new technology to modularize and spread throughout a marketplace as companies become specialists in their given niche and need their products to work with others. It is always a fascinating process to observe, and robotics is still in the very, very early days. Someday there may be a store with robot parts like arms and eyes and hands that can be bought and plugged into your home model to repair or upgrade a part.


With the evolution of new technologies that are world-changing in scope, there are tipping points where a specific problem is solved that catapults the technology forward by an order of magnitude. In the realm of computers, there was the need to shrink the physical size of the logic gates and reduce energy costs. Early computers used actual physical relays until the vacuum tube came along, which removed moving parts and sped up computation. But vacuum tubes were still painfully large, costly and complex to make, and energy inefficient. Computers would never leap from industrial to commercial use until the logic gates exponentially reduced in size and energy cost. That magic leap was the transistor.


When I get into conversations with friends, family, and random strangers at the bar, and the conversation turns to animal-like robots such as Boston Dynamics' Spot or Unitree's GO2 quadrupeds (which we sell, by the way; go and buy one: https://www.trossenrobotics.com/research-quads), the response is usually split between people finding them fascinating or finding them terrifying. People excitedly or nervously talk about how they will become war dogs or police dogs that no one can outrun.


I always say...

You have nothing to worry about right now because these robots are far too heavy, slow, and uncoordinated to be a threat. Not to mention, they can’t get very far until they fall over from lack of battery life.

That usually calms people down until I follow up with...

but wait until we make artificial muscles… Then we are all screwed!

I paint a picture of where this is going once we figure out artificial muscles.

Forget robot dogs. Imagine a robot spider about three feet wide and two feet tall, built with artificial muscles with full ball joint articulation! It would be lightweight, strong, and terrifyingly fast. Like a tarantula the size of a rottweiler. It could scale any hill, debris pile, wall, fence, building, or obstacle twice as fast as any human. Think about it!

I say excitedly as people move away from me at the party.

We can already run AI simulations to train gaits on legged robots virtually, computing hundreds of thousands of training sessions in a matter of hours. In just a few days, a robot can learn how to traverse any terrain, fight any animal, or search any area. That is literally thousands of lifetimes for an animal, compressed into days. Of course, it would be networked to the other dozen or so spiderbots in the area and cloud-connected to the internet, which has the entirety of human knowledge on it as a database.
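The idea of compressing thousands of virtual trials into hours can be sketched in a few lines. This toy example (everything here is invented for illustration: the gait parameters, the "terrain cost" function, and the search method) evaluates 100,000 candidate gaits against a synthetic cost and keeps the best one; real pipelines use physics simulators and reinforcement learning, but the compression-of-experience principle is the same.

```python
import random

def terrain_cost(stride, lift, phase):
    # Hypothetical cost: penalize deviation from an arbitrary "ideal" gait.
    return (stride - 0.6) ** 2 + (lift - 0.12) ** 2 + (phase - 0.5) ** 2

def train_gait(episodes=100_000, seed=0):
    rng = random.Random(seed)
    best_params, best_cost = None, float("inf")
    for _ in range(episodes):  # each loop iteration = one virtual trial
        params = (rng.uniform(0, 1), rng.uniform(0, 0.3), rng.uniform(0, 1))
        cost = terrain_cost(*params)
        if cost < best_cost:
            best_params, best_cost = params, cost
    return best_params, best_cost

params, cost = train_gait()
print(f"best gait after 100k virtual trials: {params}, cost {cost:.5f}")
```

A laptop runs those 100,000 "trials" in under a second; an animal would need many lifetimes to accumulate that much trial-and-error.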

At this point, I’m wondering where everyone went…


Robot Spiders
Image generated from Midjourney

This thought experiment tends to scare the shit out of people. And it will be a reality in the very near future. Hopefully, not to chase people down but to accomplish tasks the world desperately needs automated, like cleaning up our oceans, forests, streets, and garbage dumps. Imagine those spiderbots as rows of arms dangling over conveyor belts of refuse, picking out and sorting our garbage to be recycled instead of ending up in a landfill or our rivers. Or out crawling through farm fields, pulling weeds, or inside large vertical greenhouses picking tomatoes. That is where this technology is going. And yes, it will replace factory jobs, but people don’t want those jobs anymore. Just like the CNC mill replaced manually fabricating small metal parts all day by hand. Automation is the natural progression of the industrial age and has been happening for hundreds of years now.


So why are artificial muscles so necessary? Current robots use traditional electric motors to actuate their joints. These are comparatively heavy, energy-intensive, and still relatively expensive. They are also the wrong kind of actuation: motors spin, so they require complex mechanics to translate that rotational motion into linear motion, which adds weight and cost. They are also bulky, and it's difficult to pack many of them into one joint to mimic the ball joints that billions of years of evolution have perfected. The shoulders and hips of animals provide complex articulation with a high degree of control, making them excellent for locomotion in an uneven world.
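To make the rotary-to-linear translation concrete, here is a back-of-the-envelope calculation for one common mechanism, a lead screw, using the textbook relation F = 2πTη / lead. All the numbers (1 N·m of torque, 4 mm lead, 80% efficiency) are illustrative assumptions, not figures from the article.

```python
import math

def leadscrew_force(torque_nm, lead_m, efficiency):
    # Ideal screw transmission: linear force = 2*pi*torque*efficiency / lead
    return 2 * math.pi * torque_nm * efficiency / lead_m

# A 1 N*m motor driving a 4 mm lead screw at 80% efficiency:
force = leadscrew_force(torque_nm=1.0, lead_m=0.004, efficiency=0.8)
print(f"linear force: {force:.0f} N")
```

The pull is strong (roughly 1,257 N in this example), but the screw, nut, and bearings add exactly the mass, cost, and speed limits the paragraph describes, which is what a contracting muscle fiber avoids.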


Rotating wheels driven by rotating motors work great for vehicles when we have flat man-made surfaces all around to operate on. But if you throw in stairs, grassy hills, or rocky trails, the wheel fails pretty quickly. There is a reason nature has invented a rotating wheel joint only once: the bacterial flagellum, a corkscrew-like tail that bacteria use to swim.



There are many examples of nature using rolling as a form of locomotion, which is wheel-like, but not an actual wheel. The wheel spider is a favorite of mine!


 

"Robotics is the art of reverse engineering nature." ™

 

I often describe robotics as “the art of reverse engineering nature.” It is the best way I can think to describe what we roboticists do when creating advanced free-roaming robots that try to navigate and interact with the world. 


Robotic Eye
Image generated from Midjourney

Like cavemen and tools, robotics started with whatever already existed. Over time, the tools, resources, learnings, materials, and more have been refined and honed to make them more efficient and effective, bringing our creations closer to emulating natural systems. A prime example of this evolution is evident in the development of advanced vision systems and the replication of cognitive processes akin to the human brain within these robots.


We started with heavy use of sonar, radar, and lidar to allow robots to sense the world around them. Simple ultrasonic sensors can tell if something is in front of the sensor (think parking sensors on a car), and scanning radar or sonar can create more useful 2D or 3D point cloud maps. More recently, LiDAR, which stands for Light Detection and Ranging, has become cost-effective enough for broad deployment. LiDAR uses light, bringing it a step closer to a camera.

The most recent advancements use stereo cameras; in other words, electronic eyes. Depth perception is still calculated with point clouds, but more and more scientists are using raw visual data and machine learning to let robots interpret what they see. They are reverse engineering nature. We are learning to make artificial eyeballs and visual cortexes. Why is this the way to go? Just look at nature. Eyes are super low-cost (in energy) and highly versatile; when it comes to sensing the world around an animal, they are the most common “sensor” in use. Unlike the wheel, sonar has evolved to far wider adoption in nature (bats and dolphins), but it still falls far behind the all-powerful eye. I’ve often referred to engineers using radar or sonar in robotics as “cheating” because we have been using an inferior technology until we perfect the real one: eyeballs.
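The core of stereo depth perception is surprisingly simple: for a calibrated pinhole camera pair, depth = focal length × baseline ÷ disparity (how far a feature shifts between the left and right images). The camera numbers below are made up for illustration.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    # Textbook pinhole stereo model: Z = f * B / d
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 6 cm baseline between the two "eyes",
# and a feature that shifts 21 px between the left and right images.
z = stereo_depth(focal_px=700, baseline_m=0.06, disparity_px=21)
print(f"depth: {z:.2f} m")  # 2.00 m
```

Note how the geometry mirrors biology: a wider baseline (eyes farther apart) or higher resolution (finer disparity measurement) gives better depth estimates at range.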


 

Fun note!

YouTube Playlist

The author is the Chief Nerd Officer (CNO) at Trossen Robotics, which makes the Aloha kits currently used in the nascent days of machine learning training. Stanford University designed the first Aloha kits (https://tonyzhaozh.github.io/aloha/) as a method for robotic bimanual machine learning and training. The project exploded onto the scene and has evolved into hundreds of labs worldwide using similar kits for machine learning R&D. It works by having a human guide the follower arms by manipulating a pair of leader arms while the system records all the joint states and visual data from multiple cameras. This data is then used to train machine learning models for imitation and reinforcement learning. Want to learn more about how Aloha kits work and the processes for data collection, model training, and evaluation? Watch our Machine Learning Series and subscribe for all the latest updates.
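Conceptually, each timestep of such a teleoperation recording pairs what the human commanded (via the leader arms) with what the robot observed. This sketch is purely illustrative; the field names and structure are invented here and do not reflect the actual Aloha data format.

```python
def record_step(leader_joints, follower_joints, camera_frames):
    # Hypothetical record pairing the human's command with the robot's state.
    return {
        "action": list(leader_joints),        # joint targets from the leader arms
        "observation": {
            "qpos": list(follower_joints),    # follower arm joint states
            "images": dict(camera_frames),    # one frame reference per camera
        },
    }

# Two timesteps of a (fake) episode: the follower lags the leader slightly.
episode = [
    record_step([0.10, 0.25], [0.09, 0.24], {"cam_top": "frame_0.png"}),
    record_step([0.12, 0.27], [0.11, 0.26], {"cam_top": "frame_1.png"}),
]
print(len(episode), "timesteps recorded")
```

An imitation-learning model is then trained to predict the "action" from the "observation", which is why faithful, synchronized recording of both matters so much.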


 

Robot Brain
Image generated from Midjourney

Brains are another area in nature that scientists are working on reverse engineering. The holy grail of computing is AGI (Artificial General Intelligence), and in just the last few years, we have made massive strides towards getting there on the software side. Software that must run on hardware mimicking nature's very efficient wetware. We have figured out how to pack logic gates (neurons) about as tight as physics will allow us to go so that we have very compact machines capable of immense computational power. But we still haven’t made a brain. Scientists are still learning how to reverse engineer the structure of a brain and the way software (thoughts) use that wetware.


The human brain contains roughly 86 billion neurons¹ firing at rates of up to 200 Hz², equating to a computational power of 10-100 petaflops³, whilst consuming only around 20 watts of power, weighing around 1.4 kg, and taking up only 1,200-1,500 cc of space. That’s a very dense, very powerful computer that sips power. Nature 1, Technology 0.
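A quick calculation shows just how lopsided that score is. The brain figures come from the estimates cited above; the accelerator figures (60 teraflops at 300 watts) are a rough assumption for a modern GPU, used here only for scale.

```python
# Brain figures from the cited estimates (low end of 10-100 petaflops).
brain_flops = 10e15
brain_watts = 20

# Assumed figures for a modern GPU accelerator (illustrative only).
gpu_flops = 60e12
gpu_watts = 300

brain_eff = brain_flops / brain_watts  # operations per watt
gpu_eff = gpu_flops / gpu_watts
print(f"brain is roughly {brain_eff / gpu_eff:.0f}x more energy-efficient")
```

Even at the conservative end of the brain estimate, that is a gap of three to four orders of magnitude in compute per watt.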


When you step back and take a broad view of the evolution of robotics, we can see how it is slowly but surely getting closer and closer to how nature works and, in essence, is us reverse engineering the very building blocks we are made of. Clearly, muscles are a piece of that puzzle. We need to go from heavy, energy-intensive metal coils to more advanced materials that can contract with a fraction of the energy input currently used. Once that’s figured out, it is just a matter of trial and error until we optimize musculoskeletal structures of our liking and make countless kinds of creations with god knows how many arms and legs.


 

PS: Robotics actually needs not one but three significant advancements to truly explode. Artificial muscles are one of them, AGI, which we briefly mentioned, is the second, and the third is energy storage. We need much, much, much better batteries. They need to store more energy in lighter packages, and it would also be helpful if they didn’t burst into flames so easily. Nature is incredibly energy efficient—whilst a robot needs a hefty battery to store enough energy to run for a few hours, a human can run most of a day on a peanut butter sandwich—truly amazing! To match that level of output, we need energy centers in mobile robotics that match our stomachs. We don’t know if there will be a transistor moment in this area of robotics. It will most likely be a slow and steady progression of minor advances in many different areas that continue to increase energy storage. Every day, my phone shows me articles claiming amazing battery breakthroughs, and I eagerly anticipate the day they actually come to market. Because when they do, the robots we make at Trossen Robotics will run much, much longer and provide even more value to the scientists using them.
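The peanut butter sandwich comparison holds up to arithmetic. Both input numbers below are rough public figures chosen for illustration: about 400 kcal for a typical sandwich, and about 250 Wh/kg for a decent lithium-ion pack today.

```python
# Rough energy comparison: sandwich vs. lithium-ion battery (assumed figures).
sandwich_kcal = 400                  # typical peanut butter sandwich
sandwich_wh = sandwich_kcal * 1.163  # 1 kcal is about 1.163 Wh
battery_wh_per_kg = 250              # decent lithium-ion pack today

equivalent_kg = sandwich_wh / battery_wh_per_kg
print(f"sandwich energy: {sandwich_wh:.0f} Wh")
print(f"equivalent battery mass: {equivalent_kg:.1f} kg")
```

A few hundred grams of sandwich carries about as much energy as nearly two kilograms of battery, and the stomach converts it without any risk of thermal runaway.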


Matt Trossen is founder and CEO of Trossen Robotics. Matt has been tinkering and building robots for twenty years and continues to swear that this is “a temporary project.”


 

Citations


Buchner, T.J.K., Fukushima, T., Kazemipour, A. et al. Electrohydraulic musculoskeletal robotic leg for agile, adaptive, yet energy-efficient locomotion. Nat Commun 15, 7634 (2024). https://doi.org/10.1038/s41467-024-51568-3 | https://www.nature.com/articles/s41467-024-51568-3


1. Azevedo, F.A., et al. (2009). "Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain." Journal of Comparative Neurology.

2. Purves, D., et al. (2018). "Neuroscience." 6th Edition. Oxford University Press. This source provides details about the neuron firing rates.


3. Markram, H. (2012). "The Human Brain Project: A Report to the European Commission." Discusses the estimated computational power of the brain.


4. Laughlin, S.B., & Sejnowski, T.J. (2003). "Communication in Neuronal Networks." Science. Discusses the energy efficiency of brain functions.



