On February 21st, Xilinx announced new devices aimed at solutions for 5G wireless systems. The announced RFSoC devices will combine existing MPSoC capabilities with integrated ADCs and DACs.
The integrated 16nm-based RF data conversion technology includes:
- Direct RF sampling
- 12-bit ADCs at up to 4 GSPS, with digital down-conversion
- 14-bit DACs at up to 6.4 GSPS, with digital up-conversion
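As a quick sanity check on what direct RF sampling at these rates buys, recall that a converter's first Nyquist zone extends to half its sample rate. A minimal illustrative sketch (the function name is mine, and the numbers are simply those quoted above, not a full characterization of the devices):

```python
def nyquist_bandwidth(sample_rate_gsps: float) -> float:
    """First-Nyquist usable bandwidth (GHz) for a converter
    sampling at the given rate (GSPS)."""
    return sample_rate_gsps / 2.0

# Figures quoted in the announcement:
adc_bw = nyquist_bandwidth(4.0)   # 12-bit ADC at 4 GSPS -> 2.0 GHz
dac_bw = nyquist_bandwidth(6.4)   # 14-bit DAC at 6.4 GSPS -> 3.2 GHz
```

With roughly 2 GHz of first-Nyquist bandwidth on the receive side, many sub-6 GHz 5G bands can be digitized directly, without an analog IF stage.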
Current SDR solutions are typically based on superheterodyne transceivers. This architecture requires:
- An IF stage, including a local oscillator (LO)
- High-speed converters, typically needing fast SerDes links (JESD204) to interconnect the processing FPGA with the ADC and DAC
Direct RF Sampling Receiver – Source: Xilinx
Xilinx's proposed architecture, with integrated DACs and ADCs as well as direct RF sampling, simplifies and enhances SDR implementation:
- Reduced noise
- Reduced power consumption
- Reduced PCB size and routing complexity
As of the date of this article, there is no public information regarding availability dates and/or device types for the new RFSoCs.
by Bernard Murphy
Deep learning based on neural nets and many other types of machine learning have amazed us with their ability to mimic or exceed human abilities in recognizing features in images, speech and text. That leads us to imagine revolutions in how we interact with the electronic and physical worlds in home automation, autonomous driving, medical aid and many more domains.
But there’s one small nagging problem. What do we do when it doesn’t work correctly (or, even more troubling, how do we know when it’s not working correctly)? What do we do when we have to provide assurances, possibly backed up by assumption of liability, that it will work according to some legally acceptable requirement? In many of these methods, most notably the deep learning approaches, the mechanisms of recognition can no longer be traced. Just as in the brain, recognition is a distributed function and “bugs” are not necessarily easy to isolate; these systems are effectively black boxes. But unless we imagine that the systems we build will be incapable of error, we will have to find ways to manage the possibility of bugs.
The brain, on which neural nets are loosely modeled, has the same black-box characteristic and can go wrong subtly or quite spectacularly. Around that possibility has grown a family of disciplines in neuroscience, notably neuropathology and psychiatry, to understand and manage unexpected behaviors. Should we be planning similar diagnostic and curative disciplines around AI? Might your autonomous car need a therapist?
A recent article in Nature details some of the implications and work in this area. First, imagine a deep learning system used to diagnose breast cancer. It returns a positive for cancer in a patient, but there’s no easy way to review why it came to that conclusion, short of an experienced doctor repeating the analysis, which undermines the value of the AI. Yet taking the AI conclusion on trust may lead to radical surgery where none was required. At the same time, accumulating confidence in AI versus medical experts in this domain will take time and raises difficult ethical problems. It is difficult to see AI systems getting any easier treatment in FDA trials than is expected for pharmaceuticals and other medical aids. And if, after approval, certain decisions must be defended against class-action charges, how can black-box decisions be judged?
One approach to better understanding has been to start with a pre-trained network in which you tweak individual neurons and observe changes in response, in an attempt to characterize what triggers recognition. This has provided some insight into top-level loci for major feature recognition. However other experiments have shown that trained networks can recognize features in random noise or in abstract patterns. I have mentioned this before – we humans have the same weakness, known as pareidolia, a predisposition to recognize familiar objects where they don’t exist.
This weakness suggests that, at least in some contexts, AI needs to be able to defend the decisions to which it comes, so that human monitors can test for weak spots in the defense. Which shouldn’t really be a surprise. How many of us would be prepared to go along with an important decision made by someone we don’t know, supported only by “Trust me, I know what I’m doing”? To enable confidence building in experts and non-experts, work is already progressing on AI methods which are able to explain their reasoning. Put another way, training cannot be the end of the game for an intelligent system, any more than it is for us; explanation and defense should continue to be available in deployment, at least on an as-needed basis.
This does not imply that deep learning has no place. But it does suggest that it may need to be complemented by other forms of AI, particularly in critical contexts. The article mentions an example of an AI system rejecting an application for a bank loan, since this is already quite likely a candidate for deep learning (remember robot-approved home mortgages). Laws in many countries require that an adequate explanation be given for a rejection. “The AI system rejected you, I don’t know why” will not be considered legally acceptable. Deep learning complemented by a system that can present and defend an argument might be the solution. Meantime perhaps we should be adding psychotherapy training to course requirements for IT specialists, to help them manage the neuroses of all these deep learning systems we are building.
You can read the Nature article HERE.
Motion planning is determining how a robot should move to achieve a goal; for example, moving a robot's arm to a desired destination while avoiding collisions with any obstacles.
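To make the idea concrete, here is a minimal sketch of one of the simplest motion planning approaches: breadth-first search over a 2-D occupancy grid. This is an illustrative toy (the function and grid encoding are my own), not the algorithm used by any particular robot:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2-D occupancy grid.
    grid[r][c] == 1 marks an obstacle cell. Returns the shortest
    list of (row, col) cells from start to goal, or None if the
    goal is unreachable without a collision."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set + backtracking links
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # reconstruct the path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no collision-free path exists
```

Real planners work in continuous, higher-dimensional configuration spaces (sampling-based methods such as RRT are common), but the grid version captures the essence: search the free space for a collision-free route to the goal.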
The VHDL code snippets list has two new additions:
Saturation counter, and
For each counter, source codes with explanations are provided, as well as test-bench, Modelsim project, waveform .do file, screenshots, etc.
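For readers unfamiliar with the term, a saturation counter holds at its limits instead of wrapping around. A small behavioral model in Python (this is only a software sketch of the behavior; the actual posted snippet is VHDL, and the class and method names here are mine):

```python
class SaturationCounter:
    """Behavioral model of a saturating up/down counter:
    counting up holds at max_count, counting down holds at 0,
    with no wrap-around in either direction."""

    def __init__(self, max_count):
        self.max_count = max_count
        self.count = 0

    def up(self):
        if self.count < self.max_count:
            self.count += 1      # saturates at max_count

    def down(self):
        if self.count > 0:
            self.count -= 1      # saturates at 0
```

The corresponding VHDL simply guards the increment/decrement with the same comparisons inside a clocked process.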
All the files are released on GitHub.
Click here for the complete list of VHDL code snippets.
From now on, projects released on FPGA Site will be available on GitHub. This way, I hope the projects will be better maintained (with versioning) and kept in a standardized format.
For each project, the following directories will be included on GitHub:
- src – VHDL source files
- ip – Special VHDL files generated by Intel/Altera IP Wizard
- sim – Simulation files for Modelsim Altera (including .mpf project file and wave.do – waveform generating do file)
- wfm – Screenshots of waveforms from simulation runs
- tb – VHDL (and other) files for test-benching
The tree for the projects can be seen here.
by Bernard Murphy(*)
In Douglas Adams’ iconic series The Hitchhiker’s Guide to the Galaxy, a super-intelligent species created a super-powerful computer called Deep Thought to answer the ultimate question – what is the meaning of life (and the universe and everything)?
Life imitates art so it should come as no surprise that a team in London founded a venture in 2010 called (I’m sure intentionally) DeepMind. The company was acquired by Google in 2014.
One of the reasons for the explosive growth of the IoT is that embedded devices with networking capabilities and sensor interfaces are cheap enough to deploy in a plethora of locations.
However, network bandwidth is limited. Not only that, but the latency of the network can be seconds or minutes. By the time the sensor data is acquired by the centralized computers, its value for decision making could be lost. In other words, for an IoT solution to be effective, it should not only deliver meaningful data securely (filtering it as much as possible to avoid network congestion), it should also analyze that data and act upon it at its point of origin. At the very edge of the network.
In the previous entries of this series we already commented on:
In this third part of the series (as promised), we will show how to implement the timers block using not registers, but memory blocks.
Memory blocks are an often underused capability of modern FPGAs and can in many cases (as in this one) be a nice alternative for saving scarce resources like registers and LUTs. As we commented in the previous entry, implementing a block of 32 x 16-bit timers took about 7% of the LUTs of a Cyclone, and we wanted to see if we could reduce the resources consumed.
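The idea can be sketched in software before writing the VHDL: instead of building 32 separate register-based counters, keep all 32 counts as words in a single memory and service one timer per clock cycle, round-robin. A hedged behavioral model in Python (the names and the round-robin scheme shown here are mine, chosen to illustrate the principle; the actual VHDL implementation is described in the series):

```python
NUM_TIMERS = 32
MAX_COUNT = (1 << 16) - 1        # 16-bit timers, saturating at 0xFFFF

# The "memory block": one 16-bit word per timer,
# replacing 32 independent register chains.
timer_mem = [0] * NUM_TIMERS
enabled = [True] * NUM_TIMERS
index = 0                        # round-robin pointer into the memory

def clock_tick():
    """One clock cycle: read one timer word from memory, update it,
    write it back, and advance the pointer. A full sweep of all
    timers therefore takes NUM_TIMERS cycles."""
    global index
    if enabled[index] and timer_mem[index] < MAX_COUNT:
        timer_mem[index] += 1
    index = (index + 1) % NUM_TIMERS
```

The trade-off is explicit in the model: each timer is updated only once every NUM_TIMERS cycles, so its effective tick rate is the clock divided by 32 — acceptable for many timer applications, and the counters now live in a memory block instead of consuming registers and LUTs.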
For some months now I have been telling anyone willing to listen that in about ten years we will have autonomous cars in all the streets of our cities. On the one hand, I think that fact may change the way we see and plan cities forever, mainly for the better (will we have narrower streets? Will we recycle many of our parking lots into much-needed green spaces?).