How Neural Networks Think (MIT)


Source: MIT’s Computer Science and Artificial Intelligence Laboratory; David Alvarez-Melis and Tommi S. Jaakkola. General-purpose neural net training: Artificial-intelligence research has been transformed by machine-learning systems called neural networks, which learn how to perform tasks by analyzing huge volumes of training data, noted MIT research... » read more

System Bits: Aug. 22


Bioimaging technique tracks multiple in vivo interactions: To make it possible to quickly and economically monitor multiple molecular interactions in a large area of living tissue, such as an organ or a small animal, Rensselaer Polytechnic Institute researchers have created an approach to optical imaging that could have applications in medical diagnosis, guided surgery, or pre-clinical dr... » read more

Computer Vision Powers Startups, Bleeding Edge Processes


You can’t turn around these days without walking into a convolutional neural network... oh wait, maybe not yet, but sometime in the not-too-distant future, we’ll be riding in vehicles controlled by them. While not a new concept, CNNs are finally making the big time, as evidenced by a significant upswing in startup activity tracked by Chris Rowen, CEO of Cognite Ventures. According to h... » read more
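The building block behind a CNN is the convolution: a small learned filter slid across an image to detect local patterns such as edges. A minimal, dependency-free sketch of that sliding-window operation (the hand-written edge-detector kernel here is purely illustrative, not taken from any product mentioned above):

```python
def conv2d(image, kernel):
    """Naive 2D 'valid' convolution over nested lists: slide the kernel
    across the image and sum element-wise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

# A tiny vertical-edge filter applied to an image that is dark on the
# left half and bright on the right half.
image = [[0, 0, 1, 1]] * 4
kernel = [[1, -1],
          [1, -1]]          # responds where intensity changes left-to-right
edges = conv2d(image, kernel)
print(edges[0])  # strongest response at the dark/bright column boundary
```

In a real CNN the kernel weights are learned from data rather than hand-written, and embedded vision hardware accelerates exactly this multiply-accumulate loop.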

System Bits: Aug. 8


Improving robot vision, virtual reality, self-driving cars In order to generate information-rich images and video frames that will enable robots to better navigate the world and understand certain aspects of their environment, such as object distance and surface texture, engineers at Stanford University and the University of California San Diego have developed a camera that generates 4D images... » read more

The Evolution Of Deep Learning For ADAS Applications


Embedded vision solutions will be a key enabler for making automobiles fully autonomous. Giving an automobile a set of eyes, in the form of multiple cameras and image sensors, is a first step, but it will also be critical for the automobile to interpret content from those images and react accordingly. To accomplish this, embedded vision processors must be hardware-optimized for performanc... » read more

Leveraging The Power Of VDMA Engines For Computer Vision Apps


It's pretty hard to overestimate the role of heterogeneous embedded systems based on Xilinx Zynq-7000 All-Programmable devices in tasks like computer vision. Many consumer electronics and specialized devices are emerging to serve industries such as medical, automotive, security, and IoT. The combination of high-performance ARM application processing and Xilinx programmable F... » read more

System Bits: March 21


Sensors vulnerable to sonic cyber attacks According to University of Michigan researchers, sound waves could be used to hack into critical sensors in a wide range of technologies including smartphones, automobiles, medical devices and IoT devices. New research calls into question the longstanding computer science tenet that software can automatically trust hardware sensors, which feed auton... » read more

Teaching Computers To See


Vision processing is emerging as a foundation technology for a number of high-growth applications, spurring a wave of intensive research to reduce power, improve performance, and push embedded vision into the mainstream to leverage economies of scale. What began as a relatively modest development effort has turned into an all-out race for a piece of this market, and for good reason. Mark... » read more

System Bits: Nov. 15


Revolutionizing sports via AI and computer vision: A new technology developed by PlayfulVision, an EPFL startup, will be used in all NBA games in the United States starting next year to record all aspects of sporting events for subsequent analysis in augmented reality. Will artificial intelligence and computer vision revolutionize the sports industry? PlayfulVision’s approach uses ... » read more

System Bits: Nov. 8


Optimizing multiprocessor programs for non-experts: ‘Dynamic programming’ is a technique that yields efficient solutions to computational problems in economics, genomic analysis, and other fields, but adapting it to multicore chips requires a level of programming expertise that few economists and biologists have. Researchers from MIT’s Computer Science and Artificial Intelligence La... » read more
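For readers unfamiliar with the technique, a canonical dynamic program is edit distance, widely used in genomic sequence analysis. This single-threaded Python sketch shows the table-filling pattern such programs share; it is illustrative only and does not reproduce the MIT parallelization work described above:

```python
def edit_distance(a, b):
    """Classic dynamic program: dp[i][j] is the minimum number of
    insertions, deletions, and substitutions turning a[:i] into b[:j]."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                          # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j                          # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # match / substitution
    return dp[m][n]

print(edit_distance("GATTACA", "GCATGCU"))  # 4
```

Each cell depends only on its left, upper, and upper-left neighbors, and it is precisely this dependency structure that determines which cells can be computed in parallel on a multicore chip.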
