An Interview With Semiconductor
Pioneer and EDA Visionary Leader
Wally Rhines
Magdy Abadir
Abadir and Associates
Magdy Abadir, IEEE Design & Test’s
Interviews Editor, spoke with Mentor
Graphics’ Dr. Walden C. Rhines during an
in-depth interview in Austin, TX, USA, on
June 7, 2016.
Walden C. Rhines is Chairman and Chief Executive
Officer of Mentor Graphics, a leader in worldwide
electronic design automation with revenue of about
$1.2 billion in 2015. During
his tenure at Mentor Graphics, revenue has nearly
quadrupled and Mentor has grown to hold the industry's number-one market-share position in four of the ten largest product segments of the EDA industry.
Prior to joining Mentor Graphics, Rhines was
Executive Vice President of Texas Instruments’
Semiconductor Group, sharing responsibility for TI’s
Components Sector, and having direct responsibility for the entire semiconductor business with more
than $5 billion of revenue and over 30 000 people.
During his 21 years at TI, Rhines managed TI’s
thrust into digital signal processing and supervised
that business from inception with the TMS320 family of DSPs through growth to become the cornerstone
of TI’s semiconductor technology. He also supervised the development of the first TI speech synthesis
devices (used in “Speak & Spell”) and is coinventor of the GaN blue-violet light-emitting diode (now important for DVD players and low-energy lighting). He was President of TI’s Data Systems Group and held numerous other semiconductor executive management positions.
Digital Object Identifier 10.1109/MDAT.2016.2623665
Date of current version: 10 January 2017.
January/February 2017
Rhines has served five terms as Chairman of the
Electronic Design Automation Consortium and is
currently serving as a director. He is also a board
member of the Semiconductor Research Corporation
and First Growth Children & Family Charities. He has
previously served as chairman of the Semiconductor
Technical Advisory Committee of the Department of
Commerce and as a board member of the Computer
and Business Equipment Manufacturers’ Association
(CBEMA), SEMI-Sematech/SISA, Electronic Design
Automation Consortium (EDAC), University of
Michigan National Advisory Council, Lewis and Clark
College and SEMATECH.
Dr. Rhines holds a Bachelor of Science degree
in metallurgical engineering from the University of
Michigan, a Master of Science and PhD in materials
science and engineering from Stanford University,
a Master of Business Administration from Southern
Methodist University and an Honorary Doctor of
Technology degree from Nottingham Trent University.
Magdy Abadir: Our guest today is Dr. Wally
Rhines, the chairman and CEO of Mentor Graphics,
and a person who doesn’t need much introduction
given his status in the field of EDA. We’ll start with a
simple question: at heart, are you a technologist or
an entrepreneur?
Walden Rhines: I guess I’m a technologist in
that it’s an avocation for me, although I’m a businessperson by nature. But the technology has always been fascinating, so I’ve been able to stay close to it and still run a business.
Copublished by the IEEE CEDA, IEEE CASS, IEEE SSCS, and TTTC
2168-2356/17 © 2017 IEEE
Abadir: Let’s talk a little about your background
to begin this interview. We know you had a long, distinguished career in Texas Instruments before joining Mentor Graphics; you have a PhD as well as an
MBA; and last year you received the Phil Kaufman
Award, something that every EDA person dreams
of. You’ve been with TI for many years, and you
obviously drove and built up their DSP component
business. Moreover, you’re also an inventor, I understand; you invented some type of diode that’s used
in lots of applications today. So tell us about how
that part of your life made you who you are today.
Rhines: I started in engineering mostly because
I didn’t know what I wanted to do. My father was
an engineering professor, and it seemed that if you
didn’t know what you wanted to do, engineering
was probably as versatile as anything else. Having
grown up in a college town—Gainesville, Florida—I
hardly knew anyone who didn’t have at least one
PhD parent, so the thought didn’t really occur to me
that you would go into a field and not get either a
PhD, an MD, a JD, or something like that. So when
I graduated from the University of Michigan, I just
applied to a couple grad schools—Stanford and
Berkeley—and chose Stanford, where I got involved
in electronics because Stanford was a hotbed of the
semiconductor industry even then.
Now, this was 1968–1972, and one of the people on my thesis committee was Craig Barrett, who
became CEO of Intel, although at that time he was
not into electronics at all. I got increasingly engaged
in the electronics field, so by the time I finished graduate school I interviewed principally for jobs in the
semiconductor industry. One of the other members
of my committee, Gerald Pearson, had been one of
Morris Chang’s advisors when Morris got his PhD,
and Gerald said that, when it came time for me to
get a job, “I’ll take care of everything.” He just called
Morris and so I had lots of offers from around TI. I
interviewed at other places, but it just seemed that TI
was the largest semiconductor company and Texas
itself was growing rapidly, so I figured I needed to go
where I could work around people who were experienced, as opposed to going to a startup where I was
the most experienced at whatever we were doing.
Abadir: Then you got into the DSP business.
Rhines: Yes. TI was a decentralized corporation
with numerous small businesses. If you did well, you
either grew your business bigger or got another business that was bigger. I had a lot of opportunities to get
a variety of experience, including being engineering
manager and design manager for a fairly large group.
I was engineering manager for consumer products
during the period we did all the calculators, and
Speak & Spell, and things like that. Those projects
had very little design automation and lots of design,
30 chips a year typically, all of which had to be available at the Consumer Electronics Show in January or
else you killed the product.
Abadir: So why, at that time, did you then leave
DSP and go to work in EDA?
Rhines: That was a much later decision. The
DSP work resulted from the fact that I was living in
Lubbock, Texas, which I didn’t really care for. Also,
TI was in terrible trouble in microprocessors because
Motorola and Intel had just introduced the 68K and
the 8086, and TI had a 16-b microprocessor with only
a 16-b logical address space, so it wasn’t very salable and nobody wanted the job. I was fairly young,
under 30, for that kind of job, but this was an opportunity to leave Lubbock and go to Houston, which I
thought was a lot better.
When I got to Houston, the situation at TI was
quite desperate. It was clear that we had missed the
boat on 16-b host microprocessors, so we racked
our brains for what should come after host microprocessors and said, well, “there ought to be application-specific microprocessors,” and we did four
products (the TMS320, 340, 360, and 380): a digital-signal processor, one for graphics, one for mass storage, and one for communications. The 360 was a mass-storage
chip that never went very far, and the 380 was the
token-ring LAN for IBM. The DSP far outshone all the
other products, so it grew, and it teaches a lesson:
desperation is the mother of innovation.
I had two advantages. One was that we had no
other path. There was no way we could do what our
competitors did because we had lost that round, so
that forced us to do something different. But the other
thing is, TI’s management had given up hope, so they
basically left us alone and said “do something significant.” They had very little expectation, however,
which is good because these things never happen as
fast as you think they will. And while we were able to
design the 320 DSP in a couple years, the real revenue didn’t materialize for five or six more years. In a
normal circumstance, TI would have been frustrated
and gotten a new manager, but in my case they didn’t
expect much anyway, so they were happy that this
had turned into such a successful product.
Now, getting back to your question about my
getting into EDA: that happened quite a bit later.
The DSPs were a big success, and I ended up being
responsible for the semiconductor business, which
I did for quite a while. Then I moved to Austin and
ran the data systems group for four years, which was a
computer business and a different experience. Then
I went back to Dallas to run the semiconductor business, and the CEO of the company, Jerry Junkins, said,
“I imagine you have greater ambitions and I’m going
to be in this job for at least ten more years, so I won’t
stand in your way,” but of course I had a noncompete
agreement. I couldn’t work in the semiconductor
industry, so I had to look elsewhere, and that’s how
I ended up in EDA.
Abadir: Now, in addition, of course, to leading
Mentor Graphics, you’re also very active in a number
of consortia, like the Electronic Design Automation
Consortium [EDAC], and the SRC board and many
other organizations. What motivates you to do these
things, which can take up so much time?
Rhines: I think, as most people do, that there’s a
certain amount of your time you should be giving back
to grow the greater industry or to causes other than
just the advancement of your own career or your own
company. The things we do with the EDA Consortium,
which is now called the Electronic System Design Alliance, are really for the benefit of the industry, things
that individual companies can’t do by themselves.
Semiconductor Research Corporation board—a little
different, but once again in the same spirit: it’s there to
fund research in universities and “create” the students
we hire and the professors that we depend upon. I do
some other charitable things, but the truth of the matter is that I enjoy the diversity of communication, working with all these different people, and it’s very helpful
from a technology point of view.
On the SRC board, I’m exposed to a level of
technology that is far more advanced than what
I would see in an average workweek, because they’re
funding things that are further out. In the Electronic
System Design Alliance, we’re dealing with business
problems, typically—piracy, regulations on exports,
things like that—that are common challenges for
the industry and which no one company can do
by itself.
Abadir: I’m familiar with what used to be called
the EDA Market Statistics; do you recall?
Rhines: Oh, the Market Statistics Program, yes.
I’m still the board member assigned to the Market
Statistics Program. And yes, I think it’s very valuable
for startups, because they need a credible source
to show how well they’re doing in the market and
how big the market opportunity is. It’s useful for the
big companies because our own people need some
feedback that says “this is what’s happening in the
industry.” Your product categories are either growing or not growing. You should understand why, and
you should also understand whether you’re falling
behind or moving ahead.
Abadir: Let’s switch gears a little bit and talk
about a subject that you spoke on in the past, which
is EDA growth and its economics. Historically, EDA
has been tied to the semiconductor business, and the
percentage that semiconductor companies spend on
EDA has typically been around 2%, maybe less. Since
the semiconductor industry itself is not growing by
leaps and bounds, the EDA industry is suffering from
that low level of investment. One of my thoughts is,
can we get EDA to tie into systems companies which
make a lot more money. Is this happening now? Do
you see signs that it’s starting to happen?
Rhines: You’re unique in the sense that you have
been the setter and spender of those budgets for EDA
for many years, so you’ve seen the other side of it.
And you’re correct: today, EDA revenue is very stable
at 2% of semiconductor revenue, and if the semiconductor industry doesn’t grow, it’s difficult for the IC
design-related parts of the EDA industry to grow. But,
as you note, system design offers a whole new growth
opportunity for EDA. At Mentor Graphics, 50% of the
revenue comes from systems companies, and from
a product perspective, over a third of our revenue is
system design products—printed circuit boards, electrical wiring for cars, planes, and trains, embedded
software—all sorts of things that are system design
tools rather than semiconductor design tools. Our
growth rate is a mixture of a very fast-growing system
business and a very slow-growing EDA business.
Abadir: Is that system portion taken out of the
numbers when EDAC calculates the percentage?
Rhines: Well, yes and no. PCB design, for example, is part of the EDAC numbers. Wiring harness software has not traditionally been included because
Mentor’s the only major provider in the EDA industry. So the revenue growth percentages are different
for different things. Thermal analysis, for example,
and computational fluid dynamics are significant
contributors to Mentor’s system design revenue, but
they’re not counted in EDAC numbers.
Abadir: So you’re saying there are good signs for
EDA, especially within Mentor, where you have a lot
more product diversity than other companies.
Rhines: Yes, and historically, EDA grew much
faster than the semiconductor industry because not
everyone had adopted automation, so we were still
filling the seats with more and more software per person. Then we reached a point where everyone had
100% of their design flow automated, and it settled
out. We’re now, almost 20 years later, growing at
roughly the same rate as the semiconductor industry.
Abadir: Do you think that this 2% is fair? You
told me once that I was responsible for trying to keep
that EDA number down.
Rhines: No, that was your job: to get as much
EDA capability as possible for as little money as possible. What I find interesting is that EDA spend has stabilized and for 20 years it’s been 2% of semiconductor
revenue. I sometimes kid our sales force, “What value
are you providing if customers spend 2% of revenue
every year and you are unable to convince them they
should spend more?” The other side of that is it’s a
dynamic: if you look at the supply chain for the semiconductor industry, there’s a certain amount spent
on capital equipment. There’s a certain amount spent
on design software, on test equipment, on all these
things, and all of those suppliers have to reduce their
share of the cost bundle so that the semiconductor
manufacturers can reduce their cost per transistor by
25%–30% per year. If you look at EDA software and
look at the TAM—the total available market—for
the EDA industry, the EDA software cost per transistor is on the same downward learning curve that the
semiconductor industry is on, exactly parallel. And
if it weren’t, EDA would become an unmanageable
percentage of semiconductor cost.
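That parallel learning curve can be pictured with a rough sketch (hypothetical numbers, not Mentor or EDAC data): if EDA spend stays pinned at 2% of semiconductor revenue while cost per transistor falls 25%–30% per year, EDA cost per transistor must ride the same curve.

```python
# Rough sketch with hypothetical numbers (not Mentor or EDAC data):
# cost per transistor declining ~30%/year compounds steeply, and an EDA
# share fixed at 2% of revenue tracks the same downward learning curve.

def cost_per_transistor(initial_cost, annual_decline, years):
    """Cost after `years` of compound decline at rate `annual_decline`."""
    return initial_cost * (1.0 - annual_decline) ** years

# A transistor that cost 1.0 (arbitrary units) five years ago:
mfg = cost_per_transistor(1.0, 0.30, 5)
# EDA's fixed 2% share implies its per-transistor cost falls in parallel:
eda = 0.02 * mfg

print(round(mfg, 5))   # 0.16807
print(round(eda, 7))   # 0.0033614
```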
Abadir: Let’s now consider more of the technology-related challenges faced by EDA. Let’s start with PD [physical design] verification and Calibre, where you obviously excel, and an area
that’s clearly under tremendous technology challenges every time you switch nodes. I attended a
panel yesterday where they said that they almost
rewrite the code every time they move nodes,
because the underlying physics change so tremendously that it has to be rewritten.
Do you see this continuing?
Rhines: It’s true that Mentor has
particular areas of strength. Gary Smith,
now deceased but sort of a dean of analysis for our industry, divided up EDA
into roughly 60 product segments—
things like place and route, PCB design,
and so on. If you look at the market
share of the leading supplier in each
of these segments, it’s a remarkable 70% market share (and as you noted, Mentor’s the leading provider for physical verification and resolution enhancement, as well as for design for test and printed circuit board design).
What this says is that, on average, whoever is number one in a given segment has almost three-fourths of
that market. That means you can invest more. It means
the customers come to you first with their problems; it
means the communication is good. So it’s reinforcing:
it tends to last for a long, long time, and the only time
you ever see a switch is if there is a major technical
discontinuity that causes some change in the way we
design, and that’s pretty rare.
So Mentor, it’s true, has focused on the areas where
we can be number one, and 90% of our revenue
comes from areas where we have the largest market
share or where we’re the primary contender for the
largest market share, and we don’t put a lot of effort
into other areas. We do put a lot of effort into ensuring
that our tools can be well integrated into a Synopsys
flow, a Cadence flow, a Mentor flow so that a customer
doesn’t have to buy the whole enchilada from one
seller. We’ll sell a point tool. We’ll sell a subflow. We’ll
sell a full flow, and it’s really up to our customers, and
that approach is a little different from other companies.
As for the other thing you mentioned—“gee,
for a market that’s not growing very fast, that’s an
enormous amount of work, to rewrite all your software for almost every node”—it’s not quite that bad.
On average, maybe for every couple nodes there’s a
complete rewrite—but it is an enormous amount of
work for a low amount of revenue growth. You asked
can that continue, and the answer is—well, as long
as the technology changes, it has to continue or you
risk the possibility that your winners become losers.
So for the foreseeable future, yes, that will continue.
Abadir: It seems that the technology train keeps
moving, so fast that nobody can see where it’s actually going, but nobody seems to be trying to stop it.
Rhines: Every node has new problems. Those
new problems require new types of analysis. They
require enhancements to existing tools, or sometimes they require new tools. Most of the EDA growth
is from solving new problems, not from simply
upgrading older tools to work on new technology.
Abadir: You mentioned test and DFT, which is
something I’m pretty familiar with.
Rhines: Yes, you were a pioneer with that and
one of our very early test customers. Motorola probably was the first large company to broadly deploy it.
Abadir: We used it quite a bit when it was still
in the early stages, too. So test has some unique
challenges. The cost of test isn’t dropping with semiconductor cost; instead, test seems to be one of the
factors that is in fact increasing.
Rhines: In the late 1990s, it was clear, when measured on a test-cost-per-transistor basis, that test costs
were not staying on the learning curve; they were not
decreasing as fast as they should. Pat Gelsinger, who
was CTO of Intel at that time, gave the keynote at the
1999 International Test Conference and speculated
that in the future it might cost more to test a transistor
than to manufacture it. It became apparent that there
needed to be an innovation, or test costs would go out
of control. And with that, a very brilliant guy, Janusz
Rajski, developed what’s referred to as test compression—that is, compressing the test patterns to utilize
the unused pattern memory in a tester to reduce the
test time and, therefore, the test cost.
Initially, test compression was about a 10x improvement, but it went to 100x, 500x, and now it’s approaching 1000x. If that innovation had not occurred, then
the semiconductor industry would have spent another
$25 billion on testers in 2012. But because it did
occur, test stayed largely on the learning curve. There
will be continuing challenges in the future. One such
challenge is quality: as we got down to parts-per-million defect levels that were approaching very low values, the need emerged for doing transistor-level tests, and those had to be (and were) developed.
Speaking of brilliant, there’s a group in Hamburg,
Germany, that developed what’s called cell-aware
tests to look at things within a standard cell that aren’t
tested by automatic test pattern generation. That’s yet
another example of what I think the net result will be,
which is that we will find a way to do the test we need
to do, but in the future it may be through innovations
in software more than in hardware.
Abadir: Could you say something about built-in self-test—how that addresses some of these cost
issues and why it has not replaced scan?
Rhines: Mentor attempted to compete in built-in self-test for many years ourselves, and ultimately we bought the leader, LogicVision, who had the largest market share. That turned out to be quite a successful acquisition because memory built-in self-test was fairly obvious and, as more products incorporated embedded memory, it became necessary to do memory self-test. The real challenge was logic, and logic BIST was very slow to take off. What’s happened in recent years is a drive toward logic BIST, partly driven by the system developers, and it’s with logic BIST that the 85% market share of LogicVision became most valuable
for Mentor. Applications like automotive electronics,
for example, now have requirements from ISO 26262
[Functional Safety Standard] stipulating that the vehicle electronics must verify itself on startup. The only
way you can do that reasonably is with built-in self-test. All sorts of system products need to do dynamic
testing to assure their integrity, safety, and performance, and I think built-in-self-test will continue to
grow. Right now, although it’s growing from a much
smaller base, it’s growing faster than scan-based tests.
Abadir: Let’s talk now about design verification,
something you have frequently addressed. Designers
have access to a lot of technologies to aid them in
verification—simulation, formal verification, emulation, and so on—yet it doesn’t seem to lead to a
definitive sort of closure. Nobody knows when to
stop verifying. How do you address that in Mentor?
Rhines: You can almost never do too much verification, but the problem is there’s only so much time,
and you have to go to market. So compromises are
made, but even with those compromises, the amount of verification increases as the square of, or faster than, the number of gates in the design. Even if you’re putting together preverified logic blocks, the complexity of verification increases.
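A crude way to see that scaling (an illustrative model, not an industry metric): if required verification cycles grow as the square of gate count, every doubling of the design quadruples the work.

```python
# Illustrative model only: assume required verification cycles grow as
# the square of gate count. Doubling the design then quadruples the work.

def relative_effort(gates, baseline_gates):
    """Verification effort relative to a baseline design (quadratic model)."""
    return (gates / baseline_gates) ** 2

base = 1_000_000
print(relative_effort(2_000_000, base))   # 4.0
print(relative_effort(10_000_000, base))  # 100.0
```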
There’re really two modes of attack that our
industry has pursued. One is to do the verification faster so you can do more verification. We, of
course, have benefitted from the speedup of general-purpose computers, but we’ve also gained speed
by optimizing our simulators to simply do more tests.
The second way to attack the problem is emulation. Recently, emulation has extended to much more
than just graphics chips; now it’s also spread to networking chips, and almost any big CPU now requires
emulation to multiply the amount of verification you
can do by about 1000x. That allows for much more thorough verification and for verification of embedded software. Ultimately, the
combination of simulation, emulation, FPGA card
verification, and validation tools will continue to grow
just because the complexity of designs will grow. But
we still are reasonably successful at producing designs
that function at their first pass, or after a cleanup pass.
If we look at the other factor of improvement, it’s
to do smarter verification instead of just doing more
verification cycles, and there’re lots of ways that happens. One of the principal ones is formal verification.
There is a wide variety of what we call push-button
formal types of verification that are done today.
There’re also other techniques, like assertions, that
allow you to verify things that you otherwise couldn’t
verify easily with traditional event-driven simulation.
Another factor is the integration of all these verification techniques, so that you can measure the degree
of verification through a standard set of metrics now
supported by universal industry standards that Mentor played a key role in developing.
Another innovation is to be able to look at what
verification comes from the emulator, what from
the simulator, and to look at how thorough the total
coverage is—how much redundant verification you
are doing. There are also tools: Mentor provides one
called inFact that effectively drives the simulator so
that verification is orthogonal, so that you don’t keep
reverifying the same thing but are instead verifying
new things. I expect there’ll be all sorts of other innovations coming along, and because the cost of failure is so high, we’ll increasingly have to verify large
portions of the software that’s going to run on a chip
along with the hardware before we release a design
to production.
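Merging coverage from different engines, and measuring how much verification is redundant, can be pictured as set arithmetic (a simplified sketch; the coverage bin names are invented):

```python
# Simplified sketch of merging coverage from several verification
# engines and measuring redundancy (bin names are invented examples).
sim_bins = {"fifo_full", "fifo_empty", "pkt_crc_err"}          # from simulation
emu_bins = {"fifo_full", "dma_burst", "pkt_crc_err", "irq_storm"}  # from emulation

merged = sim_bins | emu_bins     # total unique coverage across engines
redundant = sim_bins & emu_bins  # bins verified by both engines

print(len(merged))     # 5 unique bins covered in total
print(len(redundant))  # 2 bins re-verified redundantly
```

Industry coverage-interoperability standards exist precisely so that metrics like these can be combined across tools.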
Abadir: Let’s talk a little bit about emulation.
I know Mentor has been providing emulation technology for decades. Is there a point in time when
emulation becomes a necessity for a particular kind
of design market? How do you see that evolving?
Rhines: Emulation has been around a long time.
I first used it with chips that my group designed in
the 1970s, but it’s a difficult technology for people to
adopt, so they put it off as long as possible. The people who do graphics chips and video chips have been
using it because they had no other choice. The only
way you can verify those chips is to see the image,
and that’s through emulation. But it’s more recent
that the microprocessor and networking communities
have found that they too had to move to emulation.
In the past four years or so, the revenue for EDA
industry emulation has doubled because all sorts
of new user types have come in to emulation not
because they wanted to, but because simulations simply wouldn’t do the job. These new users just couldn’t
do enough cycles of verification to be confident in
releasing the chip. I think that trend will continue.
Ultimately, companies will have a combination of
simulation, emulator-based verification, probably
even some kind of FPGA verification, and they’ll have
validation suites after they get their prototypes back to
do the full job of verifying, because verification only
increases, and very rapidly, and that’s good. It keeps
our industry filled with challenges and allows us to
introduce new technologies that grow our revenue.
Abadir: Talking about increasing your revenue,
I know currently the big three EDA companies are
competing in that emulation space. How do you
view that—is there a winner?
Rhines: Mentor really enjoyed a wonderful
period over the last few years, because we had emulation capability that was not available from competitors. We were the first to move to virtual stimulus, to
high-level test benches, to hardware/software coverification. So it was quite an enjoyable period; we had
the market to ourselves and enjoyed the benefit of
that growth. Today, all three major EDA companies
have significant offerings in emulation, so it’s a much
more competitive market. The good thing is that
each of us has distinctions that make us better in specific applications or capabilities, but the sales cycle
has spread out and become more complex, because
users are evaluating more than one supplier. So,
like most of EDA, it’s more work now, but I think all
three major companies will continue to survive and
generate revenue from emulation because it’s a necessary piece of the whole verification solution.
Abadir: Let us switch gears and talk about some
of the new trends that are driving our daily lives and
how that impacts the semiconductor industry and
EDA. Let us start with big data and data analytics.
Rhines: For big data, there are really two areas
where the EDA industry and the semiconductor
industry are most significantly affected. One of the
biggest is the storage, processing, and analysis of data,
which is done on phenomenally large server farms
that become bigger every day, and that require leading-edge semiconductor components in the CPUs, the
servers, the networking, the gateways that feed big
data from various sources, and even the networks of
information collectors. This is all big semiconductor
revenue, a significant share of the total market. Those
chips are among the most challenging to design, and
that affects us as well as the semiconductor industry.
The other area significantly affecting the EDA
and semiconductor industries is the way we use big
data. That’s very application dependent, but, for
example, in Mentor Graphics, there’s a major business in doing test analysis taking into account the
chip’s layout. Our customers run millions of hours of
analysis on all their tested devices to see if they can
identify second- and third-order systematic design
problems causing a yield loss. This test analysis is of
enormous value to them, because a few yield points,
as you know, can be worth millions of dollars. It’s
come to the point, frankly, where the customers who
do big chips and have sophisticated systems essentially analyze, in detail, every chip, every failure, and
they do it by analyzing the test results and comparing them to the chip layout. And that requires big
data analysis and data mining.
Abadir: Next topic: What is your take on the
Internet of Things?
Rhines: The Internet of Things is very interesting
because some of it has a lot of semiconductor content that affects the industry and will grow over time.
Most people are looking for the killer app that will
make IoT great for the semiconductor industry. The
largest part of IoT’s value is going to be harvested
by the application developers, rather than the silicon
being leveraged. But that’s not to say there won’t be
a lot of silicon at the base.
Self-identification radar, for example, is going to
end up in every car to broadcast its position, velocity, and other things. That’s a lot of silicon if you take
every car in the world and consider devices of that
complexity in the market today. The standards are
still a little fluid, however. Then there are things like
image sensors, which are already 3.5% of the semiconductor TAM, and we’re only in our infancy.
My car has just two image sensors on it now, but
cars are eventually going to have maybe a dozen—
my home’s going to have a dozen. There’re going to
be image sensors everywhere, and that’s big silicon.
That’s a lot of revenue. But, in general, the whole
idea of IoT is millions of data collection points that
have to be very low power and very low cost, or else
you can’t have millions of them. This will cause us to
look to the application developers.
Abadir: What do you think some of the killer
applications of the Internet of Things will be?
Rhines: We began the IoT discussion with processor data concerns. As for all the different applications that will be developed, they may need silicon
and sensors, but it’s the people with application expertise who are going to develop them, and they’re not primarily leveraging silicon. In this case, there is
no way to predict who is going to have the most exciting applications. Like any discontinuity in technology,
there will be innovative people from all over.
To attack that problem, Mentor Graphics acquired
a company called Tanner EDA, which offers a very
inexpensive custom design tool for the people who
aren’t expert in IC design and really don’t want to
spend the money for it. These Tanner tools are popular among big companies where there’s a small
group that doesn’t want to deal with the corporate
overhead of a big design infrastructure, and also
small companies where they don’t have the money
or specialized design expertise. The list of customers
is eclectic, and we can get all sorts of designs, a number of them for IoT-like things.
Tanner has the leading market share in MEMS
[microelectromechanical systems] design, with a very
large share of analog, mixed-signal type designs. That
helps us go after this unknown space of distributed
information collection that will be part of the IoT that
everyone is hoping will be a contributor to the semiconductor industry. Once again, I think it’s a game of
large numbers and low unit prices. But these sensors: if
there are going to be a lot of them, they have to be inexpensive (and I've been in the part of the semiconductor business where you sell logic gates for five cents).
Abadir: Does this resemble the days when people provided design kits for, say, a microcontroller?
Rhines: If you want to go after the diverse market
of non-semiconductor specialists who are doing IoT
things, there are a number of ways to do it. One is to
build FPGAs, which is what a lot of these systems are
going to use for prototyping. Another is to offer tools
similar to the ones from Tanner EDA, with which you can
design reasonably complex ICs for a very low investment and without enormous expertise in
the optimization and use of the tools. The alternative is to
try to predict the parts that have significant value, like
image sensors and the wireless transmission part of
anything in the IoT, or a market like the self-identifiers
for automobiles, and go after those very specifically,
because the unit prices are higher for them.
Abadir: Much to consider, for sure. Something
else I want to ask is about your automotive experience.
Rhines: In 1992, Mentor introduced its first automotive design product, but our automotive history
actually goes further back. Because we started from
a system design perspective and worked our way
down to transistors, whereas other EDA companies
started at transistors and worked their way up toward
systems, Mentor has a disproportionately high market share and a lot of experience with automotive
and military aerospace companies. In 1992, we
introduced our first wiring product for automobiles
and airplanes, and it grew from there. In the late
1990s, we accelerated the effort because we could
see what was happening with semiconductor chip
design EDA software. The growth rate was slowing,
yet system design people had a much larger need
but hadn’t reached the point of complexity where
they needed to use design automation.
So over almost 30 years, we were able to gradually grow that automotive base; our
automotive revenue has grown at a compound annual growth rate of 25% over the past five years. Our
total transportation product revenue is about 20%
of our total revenue—a big part of our business and
growing rapidly. It’s growing because of electronic
complexity, and it includes not just the wiring, but
the embedded software, analysis tools, simulation
tools—all sorts of things that will be required as the
electronic systems in cars become more complex.
Abadir: That diversification is very clear in your
portfolio. Now, I have one question here regarding
security because it’s a vital topic and market. But, that
said, do you see it having major potential for EDA?
Rhines: It’s a valuable market because it’s
something that is not optional for the electronics
industry. The question is, who provides the value and
who gets paid for it. At Mentor Graphics, we started in
this field about five years ago and took a leading position in the Semiconductor Research Corporation’s
special thrust in security. SRC was uniquely qualified,
because it had all sorts of member companies that
did different things in the semiconductor and EDA
industries. This was anticipating a problem that had
not become particularly noticeable, namely the security of the silicon that underlies the electronics.
Over time, we identified and focused on some
opportunities for the EDA industry, and we’re just
now getting ready to introduce products. But there is
a very broad spectrum of possibilities for the future
at three levels. At the highest level, there is software
that can be used to identify or to hide problem
areas. For example, companies increasingly offer
products that generate a balanced distribution of
power so that people have difficulty using the power
signature of your cell phone to figure out your PIN
and other things. There are a number of other tools
that can make it difficult to do either thermal or RF
analysis of a chip where there might be embedded,
encrypted code.
A second level of the security problem is supply
chain security. There have been estimates as high
as 30% of the semiconductor supply being potentially contaminated with used chips or unauthorized
chips, and that is an EDA problem. We have the technology to put security into those chips so that the user
can, in fact, be certain that the chip is authentic, but
it costs money for the chip supplier. So our job is to
make it as inexpensive and as easy as possible, and
to allow the supplier to add it in at different places.
We might need to add security in the design phase,
or after final test, because whenever the device is out
of your hands and in the hands of an assembler, or
a wafer fab, or whatever, you don’t know who might
be meddling with it. One aspect of providing security
is being able to add it such that the chip can be verified as authentic, regardless of the point in
the supply chain at which you provide it.
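The authenticity check Rhines describes can be sketched as a simple challenge-response protocol. This is an illustrative assumption rather than Mentor's actual scheme; the key, function names, and provisioning point are all hypothetical:

```python
import hashlib
import hmac
import secrets

# Hypothetical per-device secret, provisioned at a trusted point in the
# supply chain (e.g., at design time or after final test).
DEVICE_KEY = b"provisioned-secret-key"

def chip_response(challenge: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """What an on-chip security block would compute for a given challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify_chip(respond) -> bool:
    """Send a fresh random challenge; only a chip holding the key answers correctly."""
    challenge = secrets.token_bytes(16)
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(respond(challenge), expected)

# An authentic chip passes; a counterfeit without the key does not.
assert verify_chip(chip_response)
assert not verify_chip(lambda c: b"\x00" * 32)
```

Because the challenge is fresh each time, a recorded response from a genuine chip cannot be replayed by a counterfeit, which is what lets the check work at any stage of the supply chain.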
And then the third level of the security problem
is embedded Trojans, and that's probably the toughest one. We know that's going on; it just doesn't usually get
reported when it happens. There's evidence
of embedded Trojans making their way into chips
where they can be dormant for a period of time and
then be activated remotely, or they can just sit there
and transmit data that you don’t want transmitted.
That's a more difficult problem, although there are a
lot of levels at which you can attack it. We have a fair
amount of work going on in that area as well, covering
things such as the testing you do
on the embedded IP [intellectual property] that is added
into your design. If you know what the Trojan looks
like, then you can formally verify that it's not present.
You can also add assertions to provide security for
certain types of accesses that you want to prevent.
Also, you can add smart monitoring tools that will
monitor the bus of an internal processor and look for
unusual, suspicious transactions that are occurring.
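As a toy illustration of that bus-monitoring idea (the address ranges and transaction format here are invented for the example, not taken from any real product), a monitor can flag any access that falls outside the ranges a block is authorized to touch:

```python
# Address ranges this processor is authorized to access (hypothetical).
ALLOWED_RANGES = [
    (0x20000000, 0x2001FFFF),  # on-chip SRAM
    (0x40000000, 0x4000FFFF),  # peripheral registers
]

def is_suspicious(addr: int) -> bool:
    """An access is suspicious if no allowed range covers its address."""
    return not any(lo <= addr <= hi for lo, hi in ALLOWED_RANGES)

def monitor(transactions):
    """Return the transactions a monitor would raise an alert on.

    Each transaction is an (operation, address) pair, e.g. ("W", 0x20000100).
    """
    return [(op, addr) for op, addr in transactions if is_suspicious(addr)]

trace = [("W", 0x40000010), ("R", 0x20000100), ("W", 0xDEAD0000)]
alerts = monitor(trace)
assert alerts == [("W", 0xDEAD0000)]  # only the out-of-range write is flagged
```

A real on-chip monitor would do this in hardware on live bus traffic, but the principle is the same: a dormant Trojan that wakes up and exfiltrates data has to generate transactions, and those transactions can be checked against a policy.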
There are additional techniques for security and
for supply chain security that cost more money but
which have been used by the military in the past. You
just have to make a cost-benefit trade-off. As security becomes more valuable, people will be willing
to spend more to assure that their chips are secure.
Abadir: Tell me about Mentor Graphics’ R&D
connection to Egypt. What’s been your experience
with that?
Rhines: It’s been excellent. The need for talented
people is unlimited, and if we restrict ourselves to
the ones who are close to us, then we’ve ignored
the vast majority of creative people that exist in the
world. There is an inconvenience associated with
being spread around the world, but there is also the
advantage that there are people who are well educated, smart, and innovative, who have limited job
opportunity. And we are able to grow the company
more by hiring people elsewhere who have talents
than if we simply hired people in the United States.
Egypt is just one of those cases: it has an excellent
educational system, and we have operated there for
over 20 years, so we have a good reputation. The
graduates from the leading universities have had
good experiences with us, so the word gets back,
and we’re able to do very successful recruiting. But
it’s just one of 36 sites around the world that do R&D
for Mentor, and we bear the burden of dispersion in
trade for the rewards of the talent.
Abadir: So that ties into the next question, which
has to do with your R&D philosophy, personally and
within Mentor. R&D can be conducted in academia,
consortiums, startups, inside big companies; it can be
incubated or can be a big, large internal R&D team,
and so on. What do you think is the proper mix?
Rhines: At Mentor, we have a small R&D organization that differs from traditional R&D in that our
organization is empowered to take products to
market. They develop things that either nobody else
is interested in or in which no one yet sees an immediate opportunity, and they've had a terrific track record of getting products into customers' hands. Then maybe
those products aren’t quite right, so the R&D group
optimizes them until they’re stable and the revenue
stream starts to build. Then our product divisions all
of a sudden want and adopt the products, which
is unlike most central R&D organizations where the
adoption process is difficult because of the tendency
to want to reinvent the technology. In our case, to
the contrary, our divisions are anxious to take advantage of this new growing market that comes from a
tool or a product developed at a time when its significance was not obvious.
For example, AUTOSAR [AUTomotive Open
System Architecture]: in 2004, we got in that business in what we call our central New Ventures
Organization. Nobody had ever heard of AUTOSAR,
and only three companies—Mentor, Vector, and Elektrobit—developed tools for it. By the time AUTOSAR became important to the automotive industry,
we’d had 10 years of experience in developing and
supporting the tools. We limited the number of customers in the early phases so that we could optimize
the product, but we had customers that had been
working with us for years. So when AUTOSAR took
off, we were there. The things that result from nonlinear discontinuities—that’s what we do in our New
Ventures business, and that’s been very successful.
Abadir: Thank you. Now, a topic that always
comes to mind in relation to Mentor is Carl Icahn
and Icahn Enterprises. What might you say about
Mentor’s experience working with them?
Rhines: We did make a lot of money for Carl,
who is no longer a shareholder in Mentor Graphics.
We got some experience with the whole system
of how activists work—there were difficult periods,
but one of the things that happened in our case, and
I see happen in most other cases, is that once the
activist participates on the board and starts to understand the business, then they understand why we
make the decisions as we do, which is, of course,
exactly what happened at Mentor. Then the activist
investors either become supporters or they become
neutral. We’re pleased that Icahn Associates was
able to make a lot of money, and it really was a relatively short period before Icahn agreed that our strategy made sense and became a very positive contributor. So the outcome was very good, actually.
Abadir: Sounds good; it’s always nice to see a
happy ending! Okay, now I’d like to ask about yet
another topic, which is that, over the years, there are
a lot of technologies that surface, and people talk
about their benefits, yet the technology doesn't get fully
deployed for various reasons. Examples are built-in self-test, formal verification, high-level synthesis,
hardware/software codesign, and the list goes on. I
heard you say recently, “No one changes until they
have to.” Is that why some of these things kind of
limp along, with the full potential not fully realized,
or is it a case where people oversold the promise of
something that isn’t really there?
Rhines: In a way, it’s all related. The adoption
of new technology is always difficult, and while it
may look promising and there may be enthusiastic
promoters, until you need it to solve a problem, it
sits around. The frustrating part is that you have to
continue to invest in the technology even though it
hasn’t taken off, because you don’t know when that
intersection point will occur when all of a sudden
the technology becomes a necessary part of the solution.
So you mentioned high-level synthesis—well,
Mentor developed our Catapult C high-level synthesis product about 15 years ago. We even spun it off
into a joint venture, then reacquired it, and so on. In
recent times, it’s taken off for all sorts of reasons, one
of which is image processing pattern recognition.
People want to write sophisticated algorithms and to
be able to implement them in silicon, and you just
can't do that in RTL. You have to do that from C
or SystemC source, depending on whether it's control logic or algorithms and data path. There will be
other applications like that, but they come one at a
time, until something breaks. It’s similar to the emulation I talked about. Graphics required emulation
20 years ago; now networking requires emulation.
Abadir: We’re nearing the end of our interview,
so I’d like you to offer a couple words of advice to
our younger R&D readers. They might be in academia, they might be young engineers—what would
you tell them?
Rhines: I would tell them, “find out what it is that
really makes you excited, that you really enjoy.” The
most important thing is that you spend your life doing
something that’s meaningful for you, whatever it is
that excites you. Then, I’d say “try to build the skill set
so that when opportunities come along, you can be
part of things that make a big difference.” Because I’m
at a later stage in my career now, I get to talk to a lot
of people I’ve known along the way, many of whom
have risen to very high levels of management. What’s
interesting to me is when I ask, “What was the most
exciting part of your career?” Almost always, they
point way back to something such as “when I was
part of the team that made digital signal processing
possible,” or “I was part of the original team that wrote
the first BASIC interpreter for the PC," or whatever it is.
The most exciting part mentioned is rarely the
apex of their career; it’s some time when things were
really tough and they did something that really made
a difference. Those are the things that, as you become
older, you look back and they make you smile; they’re
the memories that make for an outstanding career.
Abadir: Words to the wise indeed. So now let’s
say you have a crystal ball. What do you see in it for
the next five years, for EDA?
Rhines: The system part of EDA is going to grow
rapidly and eventually surpass the revenue of the IC
design part of the EDA industry. Maybe not in five
years’ time, but possibly a decade or so, I would
expect it to. We’ll continue the next five years, certainly, with challenges on the new nodes—7- and
5 nm, whatever the naming convention is at that
point. There’ll be plenty of work to do. But the semiconductor industry will probably go back to its more
traditional growth rate—3% to 5% is probably what
you’d expect for a stable industry. That means that
EDA will have to diversify to some extent, to continue to be a growth industry.
I believe that we will see additional waves of
semiconductor growth caused by new applications,
because we’re still rapidly reducing the cost per
transistor, and it’s that improvement that enables
new applications. The applications will be different,
because in 1990, half the transistors we built were
in logic and half were in memory. Today, 99.7% are
in memory. Who buys those? Well, Google uploads
a petabyte a day of data, and the cost differential
between rotating media and solid state is rapidly
reaching the crossover point, so enormous volumes
of solid-state memory are going to be required.
Another factor is that architectural innovation is
going to take off, because now the learning curve is
causing the cost per bit of memory to drop rapidly,
but the cost per logic gate is not coming down as
rapidly. We have to efficiently use memory, and fortunately what we find is that most of what our brain
does involves intelligent memory: the ability to
recognize patterns, the ability to predict patterns.
Hearing, sight, feel: these are all pattern-recognition
problems that are solved by having integrated memory and logic, but the logic is the minor part. Most of
our brain is memory. We’ve got a path ahead over
10 or 20 years for improving memory density, because
we’re building memories in the vertical dimension—
64 layers, multiple bits per cell—and innovating all
sorts of new structures.
Beyond that point, there’re all sorts of biological techniques to take it even further. So I think
there’s going to be a lot of architectural innovation,
processing—moving away from von Neumann architectures to things that can process patterns much
more efficiently in many fewer cycles with much
less power and energy usage. I think it’s going to be
an exciting period of innovation, and the trends indicate it’s going to happen quickly, because the von
Neumann architectures are just not suited to solve
what is becoming the most important problem: pattern recognition and prediction.
Abadir: Thank you; thank you very much. On
behalf of Design & Test, we sincerely appreciate your
time and your very thoughtful perspectives.
Magdy Abadir is an independent consultant
and a member of the board of directors of a number of
EDA companies. He has 12 patents issued plus several
that have been filed. He cofounded and chaired a series
of international workshops on the economics of design,
test, and manufacturing and on microprocessor test
and verification (MTV). He has coedited several books
on those subjects, and he also published over 300
technical papers in the areas of test economics, design
for test, and design verification and economics. Abadir
has a PhD in electrical engineering from the University
of Southern California, Los Angeles, CA, USA. He is a
Fellow of the IEEE and a member of the Editorial Board
of IEEE Design&Test.
 Direct questions and comments about this article to
Magdy Abadir, Abadir and Associates, Inc., Austin,
TX 78701 USA.