Impressions of the European Microwave Conference 2008 (and Parkervision's "debutante" performance).

 

 

First of all, let's be clear about something. The European Microwave Conference is not in the same league as the US-based IEEE International Microwave Symposium (IMS). It has a long history, but has gone through some very thin times in the last decade. The trade exhibition adjunct at this year's conference in Amsterdam was about one quarter of the size, in terms of exhibitors and floor space, of the exhibition at IMS in Atlanta back in June this year. The conference, although running to four days and four parallel sessions, is diluted by a high percentage of student papers. The technical proceedings, in general, have a much higher academic content than IMS, and genuine industry-based R&D is scarce in the technical program. I believe EuMC has dropped down the general RF and microwave conference pecking order from the number 2 position it likes to claim. It was nevertheless with some interest and anticipation that I took my seat in the large auditorium for the "Power Amplifier Linearisation Techniques" session on Wednesday afternoon, to reacquaint myself with the technical claims of Parkervision, thus far available to me only in the form of their controversial patents. Despite having had little direct interest (and no financial interest) in following the business fortunes of the company over the last 12 months, I did note that their website had issued a press release promising that the Amsterdam presentation would include recent measurements on "production chips".

 

I took my seat about 10 minutes before the session started, at which time the presenters for the session were at the front of the auditorium, meeting the session Chairman and loading their material into the auditorium AV equipment. I noted with interest that only 4 of the 5 listed presenters were there, and indeed it became apparent that the Chairman was looking for the PV presenter. He asked over the PA system whether the PV representative was anywhere in the auditorium, and sure enough Mr Rawlins appeared from the gloom, loaded his material, and then disappeared again. Strange behaviour, I thought: not participating in the conference as such, and not sitting with the other presenters at the front as is the long-established custom. By the time the session started there were about 60 people in the auditorium, although the impression was of a rather sparse audience due to the large size of the hall. The first 3 papers in the session were (to show some objectivity here!) distinctly unremarkable contributions on digital predistortion. I would point out, however, that all 3 of these papers featured spectral plots of the various transmitter configurations being described, showing the improvements claimed using their various predistortion techniques. Transmitter papers should show what the transmitter transmits!

 

 

When the Parkervision paper was announced, Messrs. Rawlins and (I believe) Sorrell once again appeared from the back of the auditorium, walking down the long aisle in some kind of covert military two-step; I rather had the feeling they had not even sat in the auditorium during the previous presentations. Possibly due to their minimal interaction with the Chair, the paper started off with some confusion, as the Chair introduced the speaker as David Sorrell while Greg Rawlins made his way to the podium. So Rawlins quickly introduced himself and started his presentation. He spoke clearly and confidently, but after the statutory two or three introductory slides things slid downhill rather precipitously. For reasons which at the time I and many others around me could not imagine, Rawlins proceeded to give a physics lesson. He told us about the laws of thermodynamics, and carefully defined entropy. There were more slides with mugshots of various eminent historical physicists; the lesson continued with a rundown on Claude Shannon and his laws of communication theory, the part played in these by signal entropy, and so on. By this time the audience was getting restless: eyes were rolling, heads were shaking, and gestures of disbelief were visible all over the auditorium. Several people left; I think by this point the audience had dwindled to around 40. "Get on with it, mate", one sensed everyone was saying to themselves.

 

Well, he finally did get on with it, but had taken up so much of his allocated timeslot that he had to accelerate. He gave a very cursory description of the D2P system, although I should note he could have saved some time by not displaying an ostentatious "D2P" graphic between each slide. To me, and to anyone else with a modicum of PA knowledge, the description, stripped of the flowery (and frankly rather arrogant) physics window-dressing, indicated nothing more than a rudimentary outphasing system with additional bias controls. Quite why Rawlins chose to attempt to "blind us with science" in describing such a basic and well-known configuration, when the audience consisted mainly of graduate students and professors who themselves work on RFPAs, is beyond me. I suppose he was trying to convince us that D2P is so revolutionary that you need to go back to Shannon and Maxwell to understand it. This might be a piece of deception they can pull off with a non-technical audience, but here they were addressing specialists in the field and it went down like the proverbial lead balloon.

 

At this point I need to refer to the proceedings transcript in order to supplement my report; there was certainly not enough time or detail in the actual presentation to form much of a detailed technical opinion. D2P does seem to have evolved from its original conceptual form in the patents. For example, the key slide (Figure 3 in the proceedings transcript) is in effect an admission that the original MISO (Multiple Input Single Output) is not by itself a very good way of synthesizing a high-level amplitude and phase modulated signal, since originally it used constant-amplitude signals. The MISO (shown in Figure 8) connects the two output transistors in parallel at the output, rather than through an outphasing combiner. With a parallel interconnection, a differential phase between the two signals reduces the output voltage swing, and although this gives the required output amplitude control it also knocks the efficiency down. In my opinion this has always been a fundamental flaw in the MISO concept, which appears to be the core of just about all of Parkervision's D2P patents. So now they have quietly added an extra dimension, in the form of amplitude control. They claim that this is necessary in order to extend the dynamic range of the system, but in fact it was a necessary addition to the original D2P system due to a fundamental misunderstanding of the function of the Chireix combiner, with which they dispensed. The new feature of amplitude control is in effect a "patch" which had to be added when they discovered that the D2P of their early patents is fundamentally flawed, something I and other analysts pointed out at the time. But the addition of amplitude control, primarily in the form of bias tracking on the output transistors, adds a new drain on battery power in the form of a variable-voltage power supply.
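The parallel-combining mechanism described above can be illustrated with a toy model (my own sketch, with illustrative textbook numbers; nothing here is taken from Parkervision's design): two constant-envelope branches summed directly at a common node give an output voltage that scales as the cosine of half the differential phase, while the DC input stays roughly fixed, so efficiency falls in direct proportion to output power.

```python
import numpy as np

# Toy model of combiner-less "parallel" outphasing, as I read the MISO
# description: two constant-envelope class-B-like branches are summed
# directly, so a differential phase of 2*phi scales the output voltage
# by cos(phi) while each branch keeps drawing roughly the same DC
# current.  All numbers are illustrative assumptions, not measured data.

ETA_MAX = np.pi / 4          # ideal class-B peak efficiency (~78.5%)

def parallel_outphasing_eta(backoff_db):
    """Efficiency vs output back-off for direct parallel combining.

    Output power ~ cos^2(phi) while DC input stays fixed, so efficiency
    falls in direct proportion to output power (a class-A-like slope).
    """
    p_rel = 10 ** (-backoff_db / 10)      # Pout / Pmax
    return ETA_MAX * p_rel

for bo in (0, 3, 6, 10):
    print(f"{bo:2d} dB back-off: eta = {100 * parallel_outphasing_eta(bo):5.1f} %")
```

The point of the sketch is that at 10 dB back-off, exactly where a CDMA handset spends most of its time, the combiner-less arrangement has given up an order of magnitude in efficiency; the Chireix combiner's compensating reactances exist precisely to flatten this curve.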

 

 

The paper partitions the D2P system into 2 main blocks, the "Vector Signal Engine" (VSE) and the "Vector Power Amplifier" (VPA). The VPA block diagram indicates a complicated subsystem, containing both digital and RF elements. In particular, there is a block called the "digitally controlled power supply" (DCPS). There is a photograph of a chip (Figure 6 in the proceedings) which was touted as being "in production" but was in fact admitted to be only the VPA portion, and to me it looked like there were still some elements missing for the full stipulated VPA functionality. In particular, it looks to me as though the output matching network is not integrated and has to be added externally. Although this is common practice in handset PA products, it is important to recognise that in those products the matching network is included inside the PA module. Multiband operation cannot be implemented without physically changing this network, something which has important implications for the D2P concept in terms of the alleged BOM and cost benefits.

 

The DCPS was not addressed. As already discussed, this has to provide a variable voltage supply to the output transistors, and has to do so in an efficient manner. Such a "voltage converter" has been the subject of much ongoing research in the RFPA industry, to the extent that there are companies who have this element as their sole product (e.g. see the websites of Quantance and Nujira). The problem for D2P is that in absorbing the well-known, widely documented, and well-patented concept of power supply tracking in pursuit of better efficiency, they are using a technique which can greatly improve the efficiency of any conventional PA, and is currently doing so in newer generations of RFPA products. Indeed, in comparing the efficiency plot in Figure 14 with existing commercial products, it is important to allow those products the same voltage supply tracking benefits, something most vendors refrain from quoting in the full knowledge that the tracking power supply will itself consume power.
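To make the point concrete, here is a minimal sketch (idealized textbook relations and my own assumed converter efficiency, not measured data from any vendor) comparing an ideal class-B PA at a fixed supply with the same PA under ideal supply tracking:

```python
import numpy as np

# Illustrative comparison: an ideal class-B PA at fixed supply has
# eta = (pi/4) * (Vout/Vdd), i.e. efficiency falls as the square root
# of output power.  A tracked supply holds Vdd ~ Vout, keeping the PA
# near peak efficiency, but the DC-DC converter doing the tracking
# burns power of its own.  ETA_CONV is an assumption for illustration.

ETA_PEAK = np.pi / 4     # ideal class-B peak efficiency
ETA_CONV = 0.85          # assumed tracking-converter efficiency

def eta_fixed_supply(backoff_db):
    v_rel = 10 ** (-backoff_db / 20)      # Vout/Vdd tracks sqrt(Pout)
    return ETA_PEAK * v_rel

def eta_tracked_supply(backoff_db):
    # PA stays at peak efficiency; converter loss applies at all levels.
    return ETA_PEAK * ETA_CONV

for bo in (0, 6, 10):
    print(f"{bo:2d} dB back-off: fixed {100 * eta_fixed_supply(bo):4.1f} %, "
          f"tracked {100 * eta_tracked_supply(bo):4.1f} %")
```

Even in this idealized form, the tracked system is worse at full power (the converter loss) and better backed off, which is exactly why an honest comparison must state whether the converter's consumption is included.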

 

Finally Rawlins reached the point we had all been waiting for: the results! But to everyone's astonishment, he flicked over these slides and went straight to his conclusions, which were full of self-congratulatory bullets citing the "Stellar" performance of D2P. He did dwell a little on some constellation plots, which allegedly showed compliant EVM performance. But the screen shots were too small to read (that applies also to the transcript on the proceedings CD) and appeared to replicate similar images which have been on their website for about 9 months. I have always had concerns about the way this data is presented; the constellation trajectories do not correspond to the stipulated regulatory digitally-modulated formats, and I have to ask whether they are in fact generating the various formats correctly (see further comments below). Anyway, that was it: astonishingly, he flicked over the all-important efficiency plots, much too fast for anyone to read the information on them. It seemed he really didn't want to show or discuss the central efficiency advantage of D2P that Parkervision have been touting for over 2 years!
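For reference, rms EVM is a straightforward post-demodulation statistic. The sketch below (synthetic QPSK data with an illustrative noise level of my own choosing) computes it by the standard definition, which is precisely why a good EVM number by itself says nothing about the on-air trajectory or spectral conformance:

```python
import numpy as np

# Minimal rms-EVM calculation on a demodulated constellation (standard
# definition; synthetic data, illustrative noise level).  The point:
# this number is computed AFTER demodulation, at the symbol instants
# only, so it cannot by itself show whether the continuous RF
# trajectory meets a regulatory spectral mask.

rng = np.random.default_rng(0)
n = 1000

# Ideal unit-power QPSK reference symbols and a noisy "measured" version.
ref = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
meas = ref + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

evm_rms = np.sqrt(np.mean(np.abs(meas - ref) ** 2) / np.mean(np.abs(ref) ** 2))
print(f"rms EVM = {100 * evm_rms:.1f} %")
```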

 

 

The Chairman asked if there were any questions. I went to the nearest microphone and asked, simply, if he would mind showing us the slide with the efficiency plots (OK, I prefaced the request with a tongue-in-cheek comment about the "physics lesson"). Rawlins stared at me as if I had pointed a gun at him. He asked me to identify myself, something which is not obligatory (and hardly ever done) at this conference. I deferred to the Chair, who indicated I should continue with my question. So, with ill-disguised irritation, back he flipped to the slide in question. It showed three plots, allegedly showing the efficiency of a 1 watt CDMA D2P system as a function of output power. It was not clear whether this was peak or mean power. The first plot, with the highest efficiency, referred to the VPA itself; this is of course quite irrelevant, since the VPA by itself is not able to produce a real signal. (I should note, however, that this is the curve that appears in the website data.) The third plot (which to date has not to my knowledge appeared on the website) was clearly the one of most relevance, in that it included all of the extra power consumed by the rest of the system. I focussed on this plot and noted that the maximum efficiency was only just over 40%, a level quite comparable with typical conventional commercial handset PAs at maximum power, even when they are measured at a fixed supply voltage. In the spirit of "apples versus apples", I would suggest that if you take a typical commercial handset PA product and allow yourself to track the voltage as the signal power is reduced, the efficiency curve would actually look considerably better than the D2P results.

 

But there are further caveats on the efficiency claims. One of these was picked up by the next questioner, a former chief engineer at a large UK RF semiconductor manufacturer. He asked (picking up on Rawlins's "stellar" comment) whether the speaker would please explain what was "stellar" about his results. Rawlins was noticeably nonplussed by the question, and basically re-stated that it was the EVM performance. But the questioner pointed out that the paper gives no indication whatsoever of the distortion performance (ACP and EVM) the system displays at the plotted efficiency power levels. This is an absolute statutory requirement in any statement about PA efficiency; indeed, if ACP and EVM levels are not being met, 40% is actually a rather low number. Rawlins gave a very strange answer to this question; he said that the efficiency corresponding to the EVM constellation plots was "fifty percent". The problem with this answer is that the only meaningful efficiency plot on his own slide did not show any power level at which 50% efficiency was displayed! This particular response set my alarm bells ringing, and was the main area of doubtful commentary I picked up from some of the other attendees as the session broke up.

 

As the session broke up, I ran into an old colleague, whom I had not seen for a few years and who had never previously heard of Parkervision. He was highly amused, and said rather vociferously, "that was the biggest load of BS I've ever heard at a conference". In retrospect, I think this was possibly a bit unfair, especially given the distinctly mediocre quality of the previous papers in the session. But Rawlins certainly asked for such a response, and I think he missed a golden opportunity to make a more positive impression. Although it is understandable, on the one hand, that a company does not wish to disclose any more of its "secrets" than is necessary, to come along to an international conference and try to avoid presenting your results is at best daft, and inevitably raises doubts in people's minds about whether you are trying to hide something.

 

 

I have several ongoing concerns about D2P and the claims made for it by Parkervision. Perhaps in the context of a conference report, I will list these as if I had the opportunity to ask more questions after the presentation. So these are no more than the usual "loaded questions" that make for good conference interactions; they are questions, not accusations as such, but are serious issues that remain as yet unanswered:

 

 

1) The efficiency drops precipitously as the power is reduced. PV claim that the key benefit of D2P is its efficiency performance at reduced power (hence, I guess, the entropy and thermodynamics lecture), and that D2P is therefore most applicable to systems which use power control (a typical CDMA handset operates most of the time at a level at least 10dB lower than the maximum). But the paper now makes it clear that D2P involves "tracking" of the bias supplies, most notably to the output devices. I see no indication in the block diagram of the voltage converter which would be required to perform this task. I suspect that in the presented results PV are using a bench power supply to perform this tracking; is this correct? (If so, the efficiency and the BOM take another big hit in a final integrated version.)

 

2)  Do the constellation plots represent a direct measurement of the transmitter chain output? (If so, the signals do not conform to regulatory standards; if not, why not? This is the standard method that the entire industry uses to demonstrate spectral conformance! Are you hiding something?!)

 

3)   Since D2P has now been shown to include power supply voltage tracking, is it not the case that the same tracking applied to a conventional commercial handset RFPA chip would actually give better efficiency performance than the D2P results presented in this paper? (There is a multitude of existing products and patents that describe voltage tracking methods to improve PA efficiency, indeed many commercial handsets already use this technique).

 

4)  The chip you show in this paper is clearly not a full D2P chip, and (by my interpretation) omits even some of the stipulated VPA functions. Can you therefore justify Parkervision's claim that they would be reporting on a "production D2P chip" in this paper?

 

5)  You claim that D2P converts the raw IQ digital data streams into the final RF signal. Do you apply any predistortion to the IQ data in order to improve the linearity? Does this predistorted signal get generated in real time, or off-line using a PC running MATLAB? (MATLAB has advanced floating point and math function processing that would require a vast number of extra gates in the digital processor to implement in a handset).

 

Conclusions

 

D2P has clearly gone through some evolution since I reviewed some of the patents a year or so ago, although based on this paper they would appear to be some way yet from a fully integrated D2P chip. I use the word "evolution" somewhat euphemistically. The modifications appear to me to acknowledge the very flaws that several technical reviewers pointed out a year or more ago when the first D2P patents came out. Clearly, as these flaws have slowly surfaced, "patches" have been deployed, and with each patch comes an efficiency drop. In a sense, maybe I can understand Mr Rawlins's reluctance to show the efficiency plot (Figure 14), which could be interpreted by an accountant or a program manager as a "slip-chart" in performance as measured against earlier claims. I have to believe that as time progresses, some of the other flaws will emerge and require further remedial treatment and further slips. So to conclude, I will summarize the various concerns I have always had about the D2P system, some of which appear to have been addressed, while others remain open to further "evolution":

 

(a)  The need for a digital processor (vigorously denied at first, now admitted; I expect to see the digital horsepower required from this processor to increase from its present modest estimated size).

 

(b)  The need to include the power consumption of the rest of the system in order to make objective efficiency claims (ongoing; this paper gives an update on efficiency which shows much lower numbers than the results previously posted on the website, which thus have to be categorized as misleading).

 

(c)  When (b) is applied, the efficiency of D2P looks "ordinary", and even at best not "stellar", and I'm not convinced that the full accounting has yet been done; how is the DCPS being implemented and what is its extra power drain?

 

(d)  The stipulation that removal of the power combiner from a standard outphasing PA architecture is beneficial. It is not. It reduces the efficiency at lower power levels and has forced PV to include some voltage supply tracking in order to maintain comparable efficiencies to existing products. The first thing I would do to improve the efficiency performance of D2P would be to restore the Chireix combiner (but of course this would then be unpatentable). I feel that from Day 1 Parkervision's designers have not been able to grasp the basics of the Chireix outphasing PA. Although it is true that a combiner could not be integrated on-chip, this is clearly already the case with the output matching networks.

 

(e)  The use of external test instruments (variable power supplies, signal sources, PCs running math software) and other prototype "stand-ins" to replicate the various functions that are not yet integrated raises doubts about the validity of the claimed efficiencies. Until true full single-chip integration has been demonstrated, I do not think the efficiency claims are any more than estimates.

 

(f)  The need (still unrealised) to demonstrate that true regulated signal formats are being generated by the system. It is insufficient, and highly inappropriate, to keep showing post-demodulation "false-trajectory" constellation plots. These are not indicative of the transmitter output, the "on-air" signal, which is subject to regulatory specifications. Even ACP, which is plotted in a very unconventional way (e.g. Figs. 12 and 13), does not give the full information about the signal integrity when the signal is being generated from scratch in the system. The use of ACP as a measure of linearity is fine when the signal is being generated by an Agilent ESG, but here the claim is that the signals are being generated at RF from scratch. I remain very concerned as to the underlying reasons that have forced Parkervision to present this data in such an unconventional way.

 

(g)  The problem a single-chip D2P will have in dealing with multiband operation. In this paper they now admit to fabricating different VPA chip designs for low- and high-band operation. The VPA contains driver stages which require RF matching networks, and these are highly band-specific (the spiral inductors that are clearly visible in the chip image in Fig. 9 are merely part of such matching networks). So if D2P is to be a single integrated chip, different designs will be needed for each band. Just about any high-end mobile phone will require multiband operation, sometimes as many as 4 bands. There will therefore be a probable increase in BOM cost and transmitter PCB "real estate" due to the need for a multi-chip D2P implementation in multiband applications. This appears to me to be a major disadvantage of the D2P approach, as compared to the conventional RFPA architecture, given the much smaller size, cost, and complexity of a basic RFPA chip. There will also be a need to add multiband receiver chips to the D2P BOM if the conventional "transceiver" chip is dispensed with. D2P does not look good for multiband operation.

 

 

Steve C Cripps - Nov. 2, 2008.