Every year, some three million tonnes of pesticides are sprayed on agricultural land around the world, yet only a fraction of these chemicals is actually needed. It is time to act, and many suppliers of sprayers, tractors and agricultural robots have recognised this. They are tackling the problem in a variety of ways. For example, sprayers are fitted with cameras and sensors that photograph the ground and detect weeds, so that pesticides are applied only where they are needed. Another approach works in a similar way but without pesticides at all: agricultural robots pull out the weeds they detect. Both methods reduce pesticide use while increasing yields.
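To make the spot-spraying idea concrete, here is a minimal sketch of one classical way to find plants in a camera image: the Excess Green index (ExG = 2G − R − B), a well-known vegetation-segmentation heuristic. The function names, the tile size and the thresholds are illustrative assumptions, not the method of any particular vendor; real systems typically use trained neural networks rather than a fixed index.

```python
import numpy as np

def excess_green_mask(rgb, threshold=20):
    """Segment vegetation with the Excess Green index (ExG = 2G - R - B).

    rgb: HxWx3 uint8 image. Returns a boolean mask of pixels whose ExG
    exceeds the threshold, i.e. pixels likely to belong to plants.
    """
    r, g, b = (rgb[..., i].astype(np.int32) for i in range(3))
    exg = 2 * g - r - b
    return exg > threshold

def spray_decisions(rgb, cell=8, min_fraction=0.1):
    """Divide the image into cell x cell tiles and flag tiles whose
    vegetation fraction exceeds min_fraction -- only flagged tiles
    would receive pesticide."""
    mask = excess_green_mask(rgb)
    h, w = mask.shape
    decisions = []
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            tile = mask[y:y + cell, x:x + cell]
            decisions.append(((y, x), bool(tile.mean() > min_fraction)))
    return decisions
```

On a synthetic frame with a green patch in one corner and bare soil elsewhere, only the tile containing the patch would be flagged for spraying.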
This example shows that machines that see and understand represent a real opportunity - for example, when it comes to feeding the world's population. But how do you get machines to recognise weeds?
Edge intelligence is the technology of the moment. Data is processed by AI algorithms where it is generated, in the immediate vicinity of the sensors. Commonly used sensors include 2D and 3D stereo cameras, lidar and radar. The resulting data is fed through a pre-trained neural network in an inference step. Inference is nothing more than the conclusion the software draws on its own from the collected data: the sensor data is analysed and evaluated directly at the edge. The new data can also be used to retrain the network, for example so that it learns to recognise new components in a production process.
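The inference step described above can be sketched in a few lines: it is simply a forward pass of new sensor data through a frozen, already-trained network. The tiny two-layer network, the random "pre-trained" weights and the crop/weed/soil label set below are stand-ins for illustration only; a real edge device would load a model trained offline.

```python
import numpy as np

# Toy "pre-trained" weights -- in a real system these would be trained
# offline and then deployed to the edge device.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # 4 sensor features -> 8 hidden units
W2 = rng.normal(size=(8, 3))   # 8 hidden units -> 3 classes

CLASSES = ["crop", "weed", "soil"]  # illustrative label set

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def infer(features):
    """One inference step: a forward pass through the frozen network.

    No weights are updated here -- inference only *applies* what was
    learned during training to newly collected sensor data."""
    hidden = np.maximum(W1.T @ features, 0.0)   # ReLU activation
    probs = softmax(W2.T @ hidden)
    return CLASSES[int(np.argmax(probs))], probs
```

The point of the sketch is the separation of concerns: training happens elsewhere, while the edge device only draws conclusions, which is why comparatively modest, power-efficient hardware suffices.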
This requires three components: software, sensors and an AI-enabled embedded system. The latter, as the heart of the hardware, processes the data instantly and uses it to make intelligent decisions. Today, technology from chip manufacturer NVIDIA is often used for such inference tasks. With the Jetson product family, NVIDIA offers system-on-modules (SoMs) at different performance levels that combine CPU and GPU technology. With their parallel processor architecture, the SoMs are ideal for running the software of autonomous machines and vehicles quickly and efficiently, and above all for processing data from multiple high-resolution sensors with virtually no latency.
Another reason to choose the NVIDIA Jetson platform is its developer kits, which allow companies to start software development early and then finish it seamlessly on the production device. A large number of libraries and application-specific frameworks are available to reduce development effort. The SoMs are also compatible with the ROS2 middleware, which has established itself as a standard tool for robotics and computer vision applications. ROS2 is used to control and coordinate a large number of nodes. The middleware has a modular structure and includes functions for sensor processing and evaluation as well as actuator control.
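The node coordination that ROS2 provides follows a publish/subscribe pattern: sensor nodes publish messages on topics, and actuator nodes subscribe to them. The plain-Python sketch below illustrates only that pattern; it is not rclpy, and the `Bus` class, topic name and message format are invented for illustration. A real ROS2 node would be created with `rclpy` publishers and subscriptions instead.

```python
from collections import defaultdict
from typing import Callable

class Bus:
    """Minimal publish/subscribe bus standing in for ROS2 topics.

    Conceptual sketch only -- real ROS2 systems use the middleware's
    own nodes, topics and typed messages."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable) -> None:
        self._subs[topic].append(callback)

    def publish(self, topic: str, msg) -> None:
        for cb in self._subs[topic]:
            cb(msg)

# A sensor node publishes detections; an actuator node reacts to them.
bus = Bus()
log = []
bus.subscribe("/weeds", lambda msg: log.append(f"actuate sprayer at {msg}"))
bus.publish("/weeds", (12, 34))   # camera node reports a weed at (12, 34)
```

The decoupling shown here is the key benefit: the camera node does not need to know which, or how many, actuator nodes consume its detections.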
The latest system-on-module from NVIDIA is the Jetson AGX Orin. It delivers up to 275 TOPS of AI computing power and 64 GB of RAM, with power consumption scalable between 15 and 60 watts. The unique combination of a powerful CPU and a GPU based on the NVIDIA Ampere architecture enables new computer vision applications across all industries. Typical applications include hazard detection, environmental perception, intelligent video analysis and the control of autonomous systems.
For NVIDIA’s AI technology to work in harsh environments, such as those found in a production machine or agricultural robot, specially designed hardware is required. A handful of vendors around the world have taken on the task of making NVIDIA’s technology fit for such environments, offering NVIDIA Jetson-based embedded computers with IP67 protection. Syslogic, for example, combines the SoM with its own carrier board, an uncompromisingly rugged housing and clever connector technology. Its rugged computers are passively cooled and designed for an extended temperature range. The company also works with sensor manufacturers who place the same demands on ruggedness and reliability.
As a result, the potential of AI can also be realised in harsh industrial applications. PoE or GMSL interfaces are used to connect sensors such as lidars, radars and cameras to the embedded systems in order to realise sophisticated computer vision applications. This opens up new opportunities for companies around the world.