Xilinx announces new RFSoC devices

On February 21st, Xilinx announced new devices aimed at the design of 5G wireless systems. The announced RFSoC devices will combine existing MPSoC capabilities with integrated ADCs and DACs.

The integrated 16nm-based RF data conversion technology includes:

  • Direct RF sampling
  • 12-bit ADCs at up to 4 GSPS, with digital down-conversion
  • 14-bit DACs at up to 6.4 GSPS, with digital up-conversion
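To make the digital down-conversion step above concrete, here is a minimal NumPy sketch of a DDC: mix the RF-sampled signal with a numerically controlled oscillator (NCO) and decimate to baseband. The sample rate matches the 4 GSPS figure above, but the carrier frequency, decimation factor, and crude boxcar filter are all illustrative, not Xilinx specifications.

```python
import numpy as np

fs = 4.0e9          # ADC sample rate (the 4 GSPS figure above)
f_rf = 900.0e6      # RF carrier of interest (illustrative)
n = 4096
t = np.arange(n) / fs

# Simulated direct-RF-sampled input: a real tone at f_rf
x = np.cos(2 * np.pi * f_rf * t)

# Digital down-conversion: mix with a numerically controlled oscillator (NCO)
nco = np.exp(-2j * np.pi * f_rf * t)
baseband = x * nco

# Crude low-pass + decimate-by-16 (a real DDC uses proper CIC/FIR filters)
dec = 16
baseband = baseband.reshape(-1, dec).mean(axis=1)

# After mixing, the signal of interest sits at DC
spectrum = np.abs(np.fft.fft(baseband))
peak_bin = int(np.argmax(spectrum))
print(peak_bin)  # 0: the tone has been shifted to DC
```

In the hardened DDC blocks the NCO, filtering, and decimation run in dedicated logic next to the converter, which is what removes the need to ship full-rate samples across a JESD204 link.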

Current SDR solutions are typically based on superheterodyne transceivers. This architecture requires:

  • An IF stage, including a local oscillator (LO)
  • High-speed converters, typically requiring fast SerDes links (JESD204) to interconnect the processing FPGA with the ADC and DAC

Direct RF Sampling Receiver – Source: Xilinx

Xilinx's proposed architecture, with integrated DACs and ADCs as well as direct RF sampling, simplifies and enhances SDR implementations:

  • Reduced noise
  • Reduced power consumption
  • Reduced PCB size and routing complexity

As of the date of this article, there is no public information regarding availability dates and/or device types for the new RFSoCs.

AI and the black box problem

by Bernard Murphy

Deep learning based on neural nets and many other types of machine learning have amazed us with their ability to mimic or exceed human abilities in recognizing features in images, speech and text. That leads us to imagine revolutions in how we interact with the electronic and physical worlds in home automation, autonomous driving, medical aid and many more domains.

But there’s one small nagging problem. What do we do when it doesn’t work correctly (or, even more troubling, how do we know when it’s not working correctly)? What do we do when we have to provide assurances, possibly backed up by assumption of liability, that it will work according to some legally acceptable requirement? In many of these methods, most notably the deep learning approaches, the mechanisms for recognition can no longer be traced. Just as in the brain, recognition is a distributed function and “bugs” are not necessarily easy to isolate; these systems are effectively black boxes. But unless we imagine that the systems we build will be incapable of error, we will have to find ways to manage the possibility of bugs.

The brain, on which neural nets are loosely modeled, has the same black-box characteristic and can go wrong subtly or quite spectacularly. Around that possibility has grown a family of disciplines in neuroscience, notably neuropathology and psychiatry, to understand and manage unexpected behaviors. Should we be planning similar diagnostic and curative disciplines around AI? Might your autonomous car need a therapist?

A recent article in Nature details some of the implications and work in this area. First, imagine a deep learning system used to diagnose breast cancer. It returns a positive for cancer in a patient, but there’s no easy way to review why it came to that conclusion, short of an experienced doctor repeating the analysis, which undermines the value of the AI. Yet taking the AI conclusion on trust may lead to radical surgery where none was required. At the same time, accumulating confidence in AI versus medical experts in this domain will take time and raises difficult ethical problems. It is difficult to see AI systems getting any easier treatment in FDA trials than is expected for pharmaceuticals and other medical aids. And if after approval certain decisions must be defended against class-action charges, how can black-box decisions be judged?

One approach to better understanding has been to start with a pre-trained network in which you tweak individual neurons and observe changes in response, in an attempt to characterize what triggers recognition. This has provided some insight into top-level loci for major feature recognition. However, other experiments have shown that trained networks can recognize features in random noise or in abstract patterns. I have mentioned this before – we humans have the same weakness, known as pareidolia, a predisposition to recognize familiar objects where they don’t exist.
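The neuron-tweaking experiment described above can be sketched with a toy network. Everything here – the layer sizes, the random stand-in weights, the size of the perturbation – is invented for illustration; the point is only the method: bump one hidden unit's activation and measure how much the output moves.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network (weights are random stand-ins, not a trained model)
W1 = rng.normal(size=(8, 4))   # input (4) -> hidden (8)
W2 = rng.normal(size=(3, 8))   # hidden (8) -> output (3)

def forward(x, bump_unit=None, bump=0.0):
    """Forward pass; optionally add `bump` to one hidden unit's activation."""
    h = np.tanh(W1 @ x)
    if bump_unit is not None:
        h = h.copy()
        h[bump_unit] += bump
    return W2 @ h

x = rng.normal(size=4)
base = forward(x)

# "Tweak individual neurons and observe changes in response":
# how sensitive is the output to a small bump on each hidden unit?
sensitivity = [np.linalg.norm(forward(x, u, 0.1) - base) for u in range(8)]
most_influential = int(np.argmax(sensitivity))
print(most_influential)
```

Ranking units this way hints at which neurons act as loci for a given response; the difficulty the article points to is that in real deep networks the answer is smeared across millions of units rather than concentrated in a few.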

This weakness suggests that, at least in some contexts, AI needs to be able to defend the decisions to which it comes so that human monitors can test for weak spots in the defense. Which shouldn’t really be a surprise. How many of us would be prepared to go along with an important decision made by someone we don’t know, supported only by “Trust me, I know what I’m doing”? To enable confidence building in experts and non-experts, work is already progressing on AI methods which are able to explain their reasoning. Put another way, training cannot be the end of the game for an intelligent system, any more than it is for us; explanation and defense should continue to be available in deployment, at least on an as-needed basis.

This does not imply that deep learning has no place. But it does suggest that it may need to be complemented by other forms of AI, particularly in critical contexts. The article mentions an example of an AI system rejecting an application for a bank loan, since this is already quite likely a candidate for deep learning (remember robot-approved home mortgages?). Laws in many countries require that an adequate explanation be given for a rejection. “The AI system rejected you, I don’t know why” will not be considered legally acceptable. Deep learning complemented by a system that can present and defend an argument might be the solution. Meanwhile, perhaps we should be adding psychotherapy training to course requirements for IT specialists, to help them manage the neuroses of all these deep learning systems we are building.

You can read the Nature article HERE.

SoC FPGA for IoT Edge Computing

Edge architecture from Fujisoft presented at ISDF 2016

One of the reasons for the explosive growth of IoT is that embedded devices with networking capabilities and sensor interfaces are cheap enough to deploy at a plethora of locations.

However, network bandwidth is limited. Not only that, but network latency can be on the order of seconds or minutes. By the time the sensor data reaches the centralized computers, its value for decision making could be lost. In other words, for an IoT solution to be effective, it should not only deliver meaningful data securely (filtering it as much as possible to avoid network congestion), it should also analyze the data and act upon it at its origination point. At the very edge of the network.
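As a hedged sketch of the "filter at the edge" idea – the window size, threshold, and sample values below are invented for illustration – an edge node can forward only readings that deviate meaningfully from a running baseline, instead of streaming every sample upstream:

```python
from collections import deque

class EdgeFilter:
    """Forward only sensor readings that deviate from a moving average.

    Window size and threshold are illustrative, not from the article.
    """
    def __init__(self, window=8, threshold=5.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def process(self, reading):
        """Return the reading if it should be sent upstream, else None."""
        baseline = (sum(self.history) / len(self.history)
                    if self.history else reading)
        self.history.append(reading)
        if abs(reading - baseline) > self.threshold:
            return reading   # anomaly: worth the network bandwidth
        return None          # ordinary sample: handled locally

f = EdgeFilter()
samples = [20.0, 20.1, 19.9, 20.2, 31.0, 20.0]
sent = [s for s in samples if f.process(s) is not None]
print(sent)  # only the outlier is forwarded upstream
```

On an SoC FPGA this kind of filtering (and far heavier analytics) can run in the programmable fabric at line rate, so the network only carries the data that actually matters.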

Continue reading “SoC FPGA for IoT Edge Computing”

FPGAs and Deep Machine Learning


The concept of machine learning is not new. Attempts at systems emulating intelligent behavior, like expert systems, go as far back as the early 1980s. And the very notion of modern Artificial Intelligence has a long history. The name itself was coined at a Dartmouth College conference (1956), but the idea of an “electronic brain” was born together with the development of modern computers. AI as an idea has accompanied us since the dawn of human history.

Three recent developments are pushing “Machine Learning” forward:

  • Powerful distributed processors
  • Cheap and high volume storage
  • High bandwidth interconnection to bring the data to the processors

Continue reading “FPGAs and Deep Machine Learning”

TI power solution for Arria 10 GX


“Field programmable gate arrays (FPGAs) are increasingly complex system on chips (SoCs) that include not just programmable logic gates and random access memory (RAM) but also analog-to-digital converters (ADCs); digital-to-analog converters (DACs); and programmable analog features and signal-conditioning circuits that enable high-performance digital computations in servers, network-attached storage (NAS), enterprise switches, oscilloscopes, network analyzers, test equipment and software-defined radios.”

Continue reading “TI power solution for Arria 10 GX”

Linear Power Solutions for FPGAs

Altera Arria 10 Evaluation board – Source: Linear

Modern FPGA devices are quite complex machines. They include support for several types of I/O at different voltages (LVCMOS, LVDS, SSTL, etc.). Also, the FPGA core usually works at low voltages of around 1.0 V but at quite high currents of several amperes. Additionally, power sequencing requirements must be met.
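The power sequencing requirement mentioned above can be illustrated with a small sketch: a checker that verifies the order in which rails reached regulation against a required bring-up order. The rail names and the ordering here are purely illustrative – the real sequence comes from the FPGA datasheet, not from this example.

```python
# Hedged sketch: checking a power-up sequence against rail requirements.
# Rail names and ordering are illustrative, not from any specific
# FPGA datasheet (core typically comes up first, I/O banks last).

required_order = ["VCC_CORE", "VCC_AUX", "VCC_IO"]

def sequence_ok(events):
    """events: list of rail names in the order they reached regulation."""
    positions = {rail: i for i, rail in enumerate(events)}
    return all(positions[a] < positions[b]
               for a, b in zip(required_order, required_order[1:]))

print(sequence_ok(["VCC_CORE", "VCC_AUX", "VCC_IO"]))   # True
print(sequence_ok(["VCC_IO", "VCC_CORE", "VCC_AUX"]))   # False
```

In practice this ordering is enforced in hardware, by power-management ICs or sequencer logic, rather than checked after the fact.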

Continue reading “Linear Power Solutions for FPGAs”

Lattice Crosslink devices bridge the gap for VR solutions


“The pieces are falling into place for the Virtual Reality (VR) market. As designers move to higher bandwidth designs, integrate higher resolution displays, reduce system latency, and improve gesture and head tracking, they are beginning to deliver truly immersive experiences to VR users”

Continue reading “Lattice Crosslink devices bridge the gap for VR solutions”

Stratix 10MX – High memory bandwidth on SiP package


The Stratix® 10 MX DRAM system-in-package (SiP) family combines a 1 GHz high-performance monolithic FPGA fabric, state-of-the-art Intel Embedded Multi-die Interconnect Bridge (EMIB) technology, and High Bandwidth Memory 2 (HBM2), all in a single package.
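A back-of-the-envelope calculation shows why in-package HBM2 is attractive. The figures below are the generic JEDEC HBM2 numbers (a 1024-bit interface per stack at up to 2.0 Gb/s per pin), not Stratix 10 MX specifications – the actual per-device bandwidth should be taken from the datasheet.

```python
# Back-of-the-envelope HBM2 bandwidth per stack, using generic JEDEC
# HBM2 figures; check the device datasheet for actual numbers.

bus_width_bits = 1024      # HBM2 interface width per stack
pin_rate_gbps = 2.0        # data rate per pin, Gb/s

gb_per_s = bus_width_bits * pin_rate_gbps / 8
print(gb_per_s)  # 256.0 GB/s per stack
```

Bandwidth on that order per stack, reached over short in-package EMIB traces rather than board-level DDR routing, is the core appeal of the SiP approach.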

Image source: Altera

Continue reading “Stratix 10MX – High memory bandwidth on SiP package”

Internet of Things (IoT) – Overview whitepaper


Understanding the Issues and Challenges of a More Connected World

by Karen Rose
Senior Director, Strategy & Analysis

Reproduced from the Internet Society

Promising to transform the ways we live, work, and play, the Internet of Things (IoT) offers impressive benefits but presents significant challenges.

Continue reading “Internet of Things (IoT) – Overview whitepaper”