The vision and streaming-video space seems ripe for continued investment and innovation. I’ve discussed a few promising applications lately; here is news of an underlying technology that should give them a boost.
eYs3D, a silicon design and AI computer vision solutions company, introduced XINK at CES, a state-of-the-art computer vision development platform for next-generation autonomous robotics applications such as AIoT (artificial intelligence of things), smart cities, indoor cleaning robots, and outdoor agricultural robots, serving both the industrial and retail sectors.
XINK offers multiple benefits:
- Industry 4.0 application readiness via high-speed communications and conformance to the IEEE 1588 precision time synchronization standard (see the sketch after this list)
- Effective power management, including sleep and deep-sleep modes for unused blocks, supporting always-on operation
- Superior computing performance from the quad-core Arm Cortex-A55 and the 4.6 TOPS NPU in the eCV1 chip, which provides dedicated machine-learning instructions, a patented neural network engine, and Tensor Processing Fabric
- Highly flexible image and computer vision processing for domain-specific applications
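IEEE 1588 is the Precision Time Protocol (PTP), which lets robots and other factory equipment share a common clock to sub-microsecond accuracy. The article does not say how XINK exposes this, but on a Linux-based board it is commonly handled by the linuxptp tools. A minimal sketch, assuming linuxptp is installed, eth0 is the PTP-capable interface, and the process runs with root privileges:

```python
# Hedged sketch: observing IEEE 1588 (PTP) synchronization with the standard
# linuxptp tools. "eth0", the 5-second window, and linuxptp itself are
# assumptions about a generic embedded Linux setup, not XINK specifics.
import subprocess

def ptp_offset_lines(interface: str = "eth0", seconds: int = 5) -> list[str]:
    """Run ptp4l briefly as a slave-only clock and collect its offset reports."""
    proc = subprocess.run(
        ["timeout", str(seconds), "ptp4l", "-i", interface, "-m", "-s"],
        capture_output=True, text=True,
    )
    # Once locked to a master, ptp4l reports lines such as
    # "master offset 123 s2 freq -4567 path delay 890".
    output = proc.stdout + proc.stderr
    return [line for line in output.splitlines() if "master offset" in line]

if __name__ == "__main__":
    for line in ptp_offset_lines():
        print(line)
```

The reported offset (in nanoseconds) is the practical measure of how tightly a board is synchronized with the rest of the factory network.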
XINK is both a platform-as-a-service (PaaS) and a hardware and software development kit, offering a cost-effective path to designing safe, vision-capable mobile AI products with field analysis, object recognition, obstacle detection, object tracking and following, and route-planning functions.
XINK provides all the elements needed for product development, including high-performance compute, an AI accelerator, I/O controls and Flexi-bus communication peripherals, smart power management, and machine vision subsystems. The modular platform takes care of the low-level programming, freeing developers to use cut-and-paste coding for application-specific design and shortening design cycles for quicker commercialization.
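The article does not show XINK’s APIs, so the snippet below is only a hypothetical sketch of the kind of application-level loop those functions imply: grab a depth frame, check for obstacles, pick a motion command. Every function in it is a placeholder stub invented for illustration, standing in for the platform’s own frame source, detector, and planner.

```python
# Hypothetical application loop for a vision-capable mobile robot.
# None of these names come from the XINK SDK; the frame source, detector,
# and planner are placeholder stubs standing in for platform services.
import numpy as np

def get_depth_frame() -> np.ndarray:
    """Placeholder for the platform's depth stream (values in meters)."""
    return np.random.uniform(0.3, 5.0, size=(60, 80)).astype(np.float32)

def detect_obstacles(depth: np.ndarray, threshold_m: float = 0.8) -> bool:
    """Flag an obstacle if anything in the central region is too close."""
    h, w = depth.shape
    center = depth[h // 3: 2 * h // 3, w // 3: 2 * w // 3]
    return bool((center < threshold_m).any())

def plan_step(obstacle_ahead: bool) -> str:
    """Trivial stand-in for route planning: turn when the path is blocked."""
    return "turn_left" if obstacle_ahead else "go_forward"

if __name__ == "__main__":
    for _ in range(10):  # a real application would loop until shutdown
        frame = get_depth_frame()
        command = plan_step(detect_obstacles(frame))
        print(command)  # a real robot would send this to its motion controller
```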
The platform has H.264 compression for video streaming as well as image signal processing (ISP) support. XINK accepts image data either from an external ISP, such as eYs3D’s eSP87x-series stereo video and depth processor, or from the software ISP (soft code) running on the XINK CPU.
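To make the streaming path concrete: on embedded Linux boards, H.264 streaming of a camera feed is often prototyped with a GStreamer pipeline before moving to the hardware encoder. The sketch below is generic GStreamer usage, not an eYs3D example; the device node, receiver address, and the software x264enc element are all assumptions.

```python
# Hedged sketch: streaming a camera over the network as H.264 RTP using a
# generic GStreamer pipeline. /dev/video0, the receiver address 192.168.1.50,
# and the software x264enc encoder are assumptions for illustration,
# not published XINK details.
import shlex
import subprocess

PIPELINE = (
    "v4l2src device=/dev/video0 ! videoconvert ! "
    "x264enc tune=zerolatency bitrate=2000 ! "
    "rtph264pay config-interval=1 pt=96 ! "
    "udpsink host=192.168.1.50 port=5000"
)

if __name__ == "__main__":
    # gst-launch-1.0 builds the pipeline and runs it until interrupted.
    subprocess.run(["gst-launch-1.0", *shlex.split(PIPELINE)])
```

On shipping hardware, the software x264enc element would normally give way to the SoC’s hardware H.264 encoder, but the pipeline structure stays the same.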
The edge AI processing is powered by eYs3D’s new eCV1 AI chip, which incorporates four 64-bit Arm CPU cores and a 4.6 TOPS neural processing unit (NPU). An additional low-power Arm Cortex-M4 processor can serve as an MCU. The platform supports various AI inference tools, including TensorFlow, TensorFlow Lite, PyTorch, Caffe, TVM, and more.
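TensorFlow Lite is the most common of those frameworks on edge devices, and NPUs are normally reached by handing its interpreter a vendor delegate. Here is a minimal, generic tflite-runtime sketch; the model path is a placeholder, and the eCV1/XINK delegate is omitted because it isn’t documented in the article.

```python
# Hedged sketch of running a classifier with the standard tflite-runtime
# interpreter. "model.tflite" is a placeholder; on a platform with an NPU,
# a vendor delegate would normally be passed via
# Interpreter(..., experimental_delegates=[...]), omitted here.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")  # placeholder model file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy input matching the model's expected shape and dtype.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])
print("top class:", int(np.argmax(scores)))
```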