AI Robots Could Be the Next Big Platform After Smartphones
Japan is betting that the next universal technology after the smartphone will not fit in your pocket. Instead, it might roll or walk into your living room.
Through the Moonshot research program funded by the Japan Science and Technology Agency, universities and labs across Japan are working toward a bold 2050 vision. One of the most ambitious goals is to integrate autonomously learning AI robots into everyday life, with a special focus on elderly care.
This is Goal 3 of the Moonshot initiative, and it targets a very real challenge. Japan has a rapidly aging population and a growing need for support with caregiving tasks. The idea is to build robots that can handle everyday assistance such as cooking, cleaning and hygiene care, so human caregivers can focus on emotional support and quality of life.
Under the hood, this future of care robotics leans heavily on familiar PC and gaming technologies. NVIDIA GPUs, Jetson modules and RTX-powered laptops are at the core of how these robots see, think and learn.
NVIDIA GPUs Inside: How AIREC Robots Learn to Care
The main family of senior care robots in this project is called AIREC, short for AI-driven Robot for Embrace and Care. The family includes several members, each designed for specific tasks and powered by different NVIDIA platforms.
The larger, more mobile robot in the lineup is called Dry AIREC. It is equipped with two NVIDIA GPUs onboard. These GPUs handle the heavy lifting for AI inference and decision making in real time. That means things like understanding a person’s posture, planning movements and controlling the many motors and joints involved in safe physical assistance.
Another key platform in the project is AIREC Basic. This robot is used mainly for data collection and for training a motion foundation model. Instead of relying on a full desktop-style GPU, AIREC Basic uses three NVIDIA Jetson Orin NX modules. These are compact, power-efficient systems designed for edge AI, similar in spirit to putting a mini gaming-capable GPU and CPU package directly into a robot.
Running AI at the edge is crucial for this type of application. Care robots need low-latency responses and cannot always depend on cloud connectivity. With Jetson Orin NX modules, they can process sensor data, run neural networks and react to situations directly on the robot.
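The latency constraint can be pictured as a deadline-checked control loop. The Python sketch below is purely illustrative: `estimate_posture` and `plan_action` are invented stand-ins for onboard neural networks, and the cycle budget is a placeholder figure, not a project specification. The point is the structure: if a cycle overruns its deadline, the robot falls back to a safe stop rather than acting on stale perception.

```python
import time

CYCLE_BUDGET_S = 0.05  # 20 Hz control loop; an illustrative figure, not a project spec

def estimate_posture(frame):
    """Stand-in for an onboard perception network."""
    return {"lying_on": "back", "confidence": 0.9}

def plan_action(posture):
    """Stand-in for an onboard planning network."""
    return "hold" if posture["confidence"] < 0.8 else "assist"

def control_cycle(frame):
    """Run one perception-plan cycle; fall back to a safe stop on overrun."""
    start = time.monotonic()
    posture = estimate_posture(frame)
    action = plan_action(posture)
    elapsed = time.monotonic() - start
    if elapsed > CYCLE_BUDGET_S:
        return "safe_stop"  # missed deadline: never act on stale perception
    return action

print(control_cycle(frame=None))
```

Running inference on the robot itself is what makes this budget realistic: a round trip to a cloud server could easily consume the whole cycle before any computation happens.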
The robots are not just learning in the real world. NVIDIA Isaac Sim, an open-source robotics simulation framework, is used to train and test AIREC robots virtually. In simulation, developers can model environments, run thousands of scenarios and refine robot behaviors without risking human safety. Tasks like estimating forces between objects, planning motion around beds and interacting with fragile items can all be simulated at high speed.
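Isaac Sim exposes its own Python API; as a framework-agnostic illustration of the "thousands of scenarios" idea, the sketch below randomizes scenario parameters over many trials and counts how often a toy contact-force model exceeds a safety limit. Every function name, range and threshold here is invented for illustration and does not come from the project.

```python
import random

MAX_SAFE_FORCE_N = 50.0  # illustrative safety threshold, not a project value

def simulated_contact_force(patient_weight_kg, bed_height_m):
    """Toy stand-in for a physics engine's contact-force readout."""
    return 0.6 * patient_weight_kg - 10.0 * bed_height_m

def run_trials(n, seed=0):
    """Randomize scenario parameters and count safety violations."""
    rng = random.Random(seed)
    violations = 0
    for _ in range(n):
        weight = rng.uniform(40.0, 100.0)  # randomized patient weight (kg)
        height = rng.uniform(0.4, 0.8)     # randomized bed height (m)
        if simulated_contact_force(weight, height) > MAX_SAFE_FORCE_N:
            violations += 1
    return violations

print(run_trials(1000))  # number of unsafe trials across randomized scenarios
```

Fixing the random seed makes a batch of trials reproducible, which is what lets developers re-run a failing scenario after changing a behavior.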
According to researchers, the leap in generative AI and GPU acceleration over the past five years has turned what once felt like science fiction into something that can be meaningfully prototyped today. GPUs that gamers recognize for ray tracing and high-frame-rate performance are now also powering humanoid robots that might one day help someone's grandparents.
Training Robots for Real-World Care
A number of research teams are working on specific caregiving skills that these robots will need. The focus goes beyond simple chores and looks at very sensitive tasks usually handled by trained caregivers.
Some of the key activities under development include:
- Changing diapers and assisting with hygiene care
- Helping patients take baths safely
- Providing assistance with meals
- Repositioning patients in bed to prevent bed sores
One project highlighted in the initiative is led by bioengineering researchers from the University of Tokyo. Their work centers on automating repositioning in bed using a humanoid-style robot. This is a complex challenge: the robot must understand the patient's body position, health condition and comfort level, then apply just the right amount of force at the right places.
To train the Dry AIREC robots for this task, the team used laptops powered by NVIDIA RTX GPUs. These GPUs accelerate several demanding workloads:
- 3D posture estimation to understand how a patient is lying in bed
- Trajectory calculations to plan safe movement paths for the robot’s arms
- Force estimation to predict how much pressure to apply and when
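These three workloads form a pipeline: perception feeds planning, and planning feeds force estimation. The hypothetical sketch below wires them together with stub functions to show the data flow; none of the function names, coordinates or force values come from the project.

```python
from dataclasses import dataclass

@dataclass
class Posture:
    shoulder: tuple  # (x, y, z) in meters, robot frame
    knee: tuple

def estimate_posture_3d(depth_image):
    """Stub for GPU-accelerated 3D posture estimation."""
    return Posture(shoulder=(0.3, 0.5, 0.7), knee=(0.3, 1.2, 0.6))

def plan_trajectory(start, goal, steps=5):
    """Stub trajectory planner: linear interpolation between 3D points."""
    return [
        tuple(s + (g - s) * t / (steps - 1) for s, g in zip(start, goal))
        for t in range(steps)
    ]

def estimate_force(posture, waypoint):
    """Stub force model: more force the farther the arm reaches."""
    reach = sum((w - p) ** 2 for w, p in zip(waypoint, posture.shoulder)) ** 0.5
    return 5.0 + 10.0 * reach  # newtons, illustrative only

posture = estimate_posture_3d(depth_image=None)
path = plan_trajectory(posture.shoulder, goal=(0.3, 0.5, 0.9))
forces = [estimate_force(posture, w) for w in path]
print(len(path), round(forces[-1], 1))  # → 5 7.0
```

In the real system each stub would be a GPU-accelerated model, but the dependency order is the same: a posture estimate constrains the trajectory, and the trajectory determines where and how hard to push.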
The robot uses fisheye and depth cameras to capture a detailed view of the patient and surroundings. Movement data from skilled human caregivers is recorded and then used to calculate ideal motion trajectories. These trajectories guide the robot on how to move shoulders, knees and other body parts without causing pain.
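One common way to turn recorded motions into smooth robot trajectories, consistent with the idea described above, is to re-time the demonstrated path with a minimum-jerk profile, which starts and ends with zero velocity and acceleration. The sketch below applies that profile to a short list of made-up waypoints; the waypoints and sample count are illustrative, not recorded caregiver data.

```python
def min_jerk(s):
    """Minimum-jerk time scaling: s in [0, 1] -> progress in [0, 1]."""
    return 10 * s**3 - 15 * s**4 + 6 * s**5

def retime_path(waypoints, n_samples):
    """Resample a piecewise-linear path with minimum-jerk progress."""
    samples = []
    segments = len(waypoints) - 1
    for i in range(n_samples):
        progress = min_jerk(i / (n_samples - 1)) * segments
        seg = min(int(progress), segments - 1)
        frac = progress - seg
        a, b = waypoints[seg], waypoints[seg + 1]
        samples.append(tuple(x + (y - x) * frac for x, y in zip(a, b)))
    return samples

# Made-up shoulder waypoints (meters) from a hypothetical demonstration.
demo = [(0.3, 0.5, 0.7), (0.35, 0.5, 0.8), (0.3, 0.5, 0.9)]
path = retime_path(demo, n_samples=9)
print(path[0], path[-1])  # endpoints match the demonstration
```

The gentle start and stop of the minimum-jerk profile matters here: abrupt accelerations at the shoulder or knee are exactly what a repositioning motion must avoid.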
The system also predicts the needed pressure at key contact points. By combining timing and force estimates, the robot can perform repositioning with controlled and safe movements.
Initial experiments were carried out with mannequins to fine-tune the algorithms without risk. As the models and safety systems improved, the research moved on to carefully monitored tests involving human participants. This work is ongoing, with continuous improvements to comfort, reliability and precision.
For some of the researchers, this project is not just about robotics or AI benchmarks; it has deep personal meaning. As their own family members age, the potential to apply medical robotics experience to real-world care pushes them to design systems that are not only powerful but also safe, reliable and human-centered.
The Moonshot team working on this goal will share more of their progress at the 2026 International Symposium on System Integration. For now, their work shows how technologies familiar from gaming rigs and AI workstations are expanding into a new frontier. The same GPU architecture that drives high-frame-rate games and 3D simulations is also training robots that could change how societies support their elderly in the decades ahead.
Original article and image: https://blogs.nvidia.com/blog/japan-science-technology-agency-develops-moonshot-robot/
