This article is reprinted from:

Bing Sheu, "Neural Networks and Beyond - An Interview with Robert J. Marks",

IEEE Circuits and Devices, September 1996, pp. 42-44.




Neural Networks and Beyond -

An Interview with Robert J. Marks II

Professor Robert J. Marks II, University of Washington, Department of Electrical Engineering, Seattle, WA, r.marks@ieee.org, has an impressive record of contributions to the IEEE. He is a Fellow of both the Optical Society of America and the IEEE. In 1987, under then CAS President Ming Liou, he founded the CAS Technical Committee on Neural Systems and Applications and served as its first Chair. As the CAS representative to the newly formed IEEE Neural Networks Committee in 1987, he was elected Secretary and then Chair. The IEEE Neural Networks Committee evolved into the IEEE Neural Networks Council in 1990, and Marks was elected its first President. He now serves as Editor-in-Chief of the IEEE Transactions on Neural Networks and as the representative of LEOS on the NNC. He also serves on the Board of Governors of the IEEE Circuits and Systems Society and was the General Chair of the 1995 International Symposium on Circuits and Systems (ISCAS) in Seattle.

CAD: What is your view of IEEE in general and the Circuits and Systems Society specifically?

Marks: The Circuits and Systems Society remains one of the most entrepreneurial societies in IEEE. It has kept pace nicely with emerging technologies and is an extraordinary innovator. One project I am particularly excited about is electronic reviewing of IEEE Transactions manuscripts. The volunteers in CAS are riding point on initiating this within IEEE.

In the parlance of adolescence, IEEE rules. No one will dispute IEEE is number one in a lot of areas, including quality conferences, archival publications and regional activities. It is to electrotechnology what a TV camera is to Jay Leno - essential. IEEE's dominant presence in electrotechnology is well deserved. There is no other place in engineering where so many volunteers contribute to the advancement of a profession. There are things with which I disagree, but this will always be true in any large organization. Any oil painting viewed closely enough, though, has brush marks.

CAD: What are some of the things with which you disagree?

Marks: I wish there was a bit less inertia at the top in favor of the status quo. For example, in 1991, the Neural Networks Council decided to put conference proceedings on a CD ROM. We went to IEEE, which raised all sorts of objections - the paper image wasn't of high enough quality, the CD ROM couldn't be made compatible with the UNIX, DOS and Apple operating systems. We went ahead and did it anyway. The 1992 International Conference on Neural Networks in Baltimore was the first IEEE conference, to my knowledge, where the conference record was made available to participants on CD ROM. Russ Eberhart, the NNC President then, along with Greg Zick and Mani Soma at the University of Washington, got the job done. Today, IEEE makes CD ROMs available to any IEEE conference that wants them. This experience, though, shows a very positive aspect of IEEE at the society level: innovators have the freedom to be inventive.

CAD: Are you aware of any similar innovations currently under way in IEEE?

Marks: Absolutely. Randy Geiger, a previous CAS President who is serving on the TAB Periodicals Committee, is looking into electronic publishing of IEEE periodicals. The current aim, as I understand it, is to give members access to IEEE periodicals over the web, followed at the end of the year by a CD ROM copy. This would allow immediate access for IEEE members outside of North America. There are numerous other advantages. Many issues need to be worked out. I'm sure the final result will be awesome.

CAD: You have been heavily involved with the IEEE Neural Networks Council including being the first NNC representative from CAS. Has neural network research peaked?

Marks: Goodness no! On the contrary. Neural networks have spilled into a number of fields including fuzzy systems and evolutionary computation. Lee Giles, a fellow neural smith, did a database search and found there were over four times as many neural network papers published in non-neural-network IEEE Transactions as are published in the IEEE Transactions on Neural Networks. This shows neural networks are flourishing in application and implementation. I have tracked both the US patent and publication activities in computational intelligence and recently included the results in a talk I gave at the 1996 International Conference on Neural Networks. (Editor's note: ``Neural Network Evolution: Some Comments on the Passing Scene'' is printed in the conference proceedings.) In both cases, the numbers continue to increase. In 1994, the last year for which nearly complete data is available, there were over 8,000 publications in neural networks and over 11,000 in computational intelligence. There were 250 patents in 1994 involving either neural networks or fuzzy systems, compared to about 60 for AI. Patents reflect the application and implementation activity of a technology.

Neural networks, in particular, continue to have a significant impact on engineering. In the 1994 Journal of Citation Reports, of the 138 electrical engineering journals ranked, the IEEE Transactions on Neural Networks scored number five in terms of impact. In three other categories in artificial intelligence, the TNN ranked first in two cases. In the remaining case, it ranked third. (Editor's note: Marks has confirmed the TNN ranked number one in the categories Computer Science: Theory and Methods and Computer Science: Hardware and Architecture.) It ranked number three in the category Computer Science: Artificial Intelligence. Two golds and a bronze. For those interested in specifics, I wrote a short editorial on these rankings in the July 1996 issue of the TNN.

CAD: What is your favorite neural network application?

Marks: In terms of pure novelty, my favorite neural network application is the use of neural networks by Bill Clinton's re-election people to identify the swing voters for this year's election. (Editor's note: The reference cited by Marks is a syndicated column by Robert Novak dated February 18, 1996. It appeared in a number of subscribing newspapers.) They identified two-parent families whose hobby was bowling. I would enjoy seeing the specifics of this neural network.


In another application, a neural network was used to identify bad Chicago policemen. (Editor's note: Scientific American, December 1994.) Although the experiment was deemed a success, the policemen's union successfully challenged the results, saying there was no specific reason given for the bad-cop classifications. In other words, the neural network was deemed a black box with no explanation facility. Whoever did the neural smithing on this project was not intimately familiar with the field of neural networks. There are a number of ways to extract an explanation facility from a trained neural network.

CAD: You state that the IEEE Transactions on Neural Networks, of which you are now Editor-in-Chief, ranked high in the area of artificial intelligence. Yet you cluster neural networks into an area called computational intelligence. What is the difference?

Marks: Good question. In terms of dictionary meaning, many neural networks are clearly artificially intelligent. Artificial intelligence, as a field, however, is identified specifically with heavily heuristic approaches using what Jim Bezdek calls ``knowledge tidbits''. The field and practicing community of conventional AI are distinctly different from those of us in CI. In the early 1990's, we were looking for a term distinct from AI that covered neural networks, fuzzy systems and evolutionary computation. There was a flurry of e-mails among the Executive Committee members of the IEEE Neural Networks Council trying to decide what this new field should be dubbed. The urgency came from the naming of the Neural Networks Council's World Congress in 1994, which brought together major conferences in neural networks, fuzzy systems and evolutionary computation. ``Intelligent systems'' was suggested. But, like AI, ``intelligent systems'' has a distinct meaning in the technical community. Jim Bezdek offered computational intelligence, and it resonated nicely. Using queries to a publications database, I did a terse statistical study of the intersection of AI and CI in a September 1993 TNN editorial.

The 1994 World Congress on Computational Intelligence was christened. Held in Orlando, it was a full-blown success, technically and financially. The next, in 1998, will be in Anchorage, Alaska.

CAD: I have heard you say the impact of the congress was also significant.

Marks: In terms of establishing CI as a discipline, absolutely. Graduate curricula in CI have been developed. IEEE Press now uses CI as one of its marketing categories. Books are now being published with CI in their titles. Before WCCI, there was no such activity.

CAD: Can you give a definition of "computational intelligence''?

Marks: The definition is still evolving. It is meant to umbrella a number of disciplines including neural networks, certain fuzzy systems and all of the sub-areas of evolutionary computation. Those interested in a discussion should see Jim Bezdek's chapter in Computational Intelligence: Imitating Life, edited by Zurada and others. Russ Eberhart takes a somewhat different view in his chapter in Computational Intelligence: A Dynamic System Perspective, edited by Palaniswami and others. Both are published by IEEE Press.

CAD: Besides your work in IEEE, you also have a reputation as a top notch researcher both in neural networks and other areas. How many papers have you published?

Marks: A lot. Many I am quite proud of. Pure publication quantity, however, has become a meaningless metric. If there is no concern about quality or impact, one can publish almost anything today. Another aspect of the IEEE Transactions I appreciate is the continuing effort to publish only the best. Also, because of IEEE's reputation, the best researchers typically submit their papers to IEEE.

CAD: Which of your neural network publications is your personal favorite?

Marks: Let me give you two answers. In terms of theory, a recent TNN paper I wrote with Russ Reed and Seho Oh showing the equivalence of a number of regularization techniques in layered perceptron training is an exciting piece of unification theory. I'm also a great believer in applications research. Application problems lead to incredible theoretical questions and exciting research opportunities. With Mohamed El-Sharkawi at the University of Washington, and some great grad students, we have written papers using neural networks in power systems. One introduces the use of neural networks to forecast power loads. This paper, I think, has been cited more than any paper on which I have ever had my name. It has found its way into three reprint volumes. My wife's grandfather was fond of saying that if he had known how long he would live, he would have taken better care of himself. If I had known how successful the load forecasting paper would become, I would have spent more time polishing it. Power engineering is one great area with a plethora of interesting problems to which neural networks and CI can be applied.

CAD: What are some other areas?

Marks: Let me first wax categorically. All engineering fields are either solutions looking for problems or problems looking for solutions. The IEEE Societies concerning power engineering, biomedical engineering, oceanic engineering and industrial applications consist of practitioners with problems looking for solutions. Signal processing, neural networks, fuzzy systems, evolutionary computation, artificial intelligence and even the broad area of computer engineering consist of practitioners with solutions looking for problems. In terms of engineering, the best world is obtained when the former meets the latter. This is why I am so fortunate to work with Mohamed El-Sharkawi. We resonate.

What are some areas where computational intelligence can be applied? The converse would be a question I would have problems answering. One of my personal favorites is finance. A number of institutions, including Berkeley and MIT, have established programs in computational finance. An annual IEEE conference, co-sponsored with the International Association of Financial Engineers and held in New York, is the best symposium dedicated to this topic. It's called the IEEE/IAFE Conference on Computational Intelligence for Financial Engineering (CIFEr). The next one will be in April in New York City.

CAD: You are the co-General Chair for this conference, correct?

Marks: Yes. Thanks for the plug.

CAD: Which of your publications outside of neural networks is your favorite?

Marks: That would have to be my book Introduction to Shannon Sampling and Interpolation Theory, published by Springer-Verlag. I worked very hard polishing that book. It has more information on Shannon sampling theory in it than one person could ever use. The book was recommended in IEEE Signal Processing Magazine as suggested reading for those interested in deep DSP. I'm very proud of it.

CAD: What are your current research areas?

Marks: In neural networks, I have worked closely with Michael Healy and Jai Choi at the Boeing Airplane Company for a number of years. We have tackled some amazing problems ranging from brake control to CAD compatibility. Boeing, to my knowledge, was the first major industry to apply neural networks to an ongoing system. They use an adaptive resonance neural network system for parts classification.

I continue to work with Mohamed El-Sharkawi on a number of exciting projects in neural networks. An NSF-sponsored project deals with the use of neural network novelty filters to detect shorted windings in rotors. The rotors we are examining can weigh several tons. Shorts can lead to vibration that can lead to the rotor breaking through the stator and landing on some innocent bystander. We have worked with Dr. Isidor Kerszenbaum of Southern California Edison on a number of interesting power engineering problems. Our current effort is using computational intelligence in dynamic security assessment.

Outside of neural networks, I have been working with Paul Cho, a UW oncologist, on the problem of designing beams to irradiate tumors of arbitrary shapes. I find the area of bioengineering fascinating.

CAD: What are your future projects within IEEE?

Marks: For the next few years, I will be working with Greg Zick, the Chair of the Electrical Engineering Department at the University of Washington, as the Department's Graduate Studies and Research Chair. It's an exciting position with opportunities for contribution and innovation. I have heard it said that the secret of doing many things at the same time is to do them all poorly. Since I don't like doing things poorly, I either have to quit teaching, curtail my research activities or reduce my involvement with IEEE. The first is not an option. The second is unthinkable. My involvement with IEEE will therefore be less than before.

CAD: What do you see for the future of computational intelligence?

Marks: Forecasting the future of technology is risky. Predictions tend to be linear, whereas technical advances come in quantum jumps from paradigm shifts. After the Second World War, forecasters in electronics would have linearly predicted breakthroughs in vacuum tube reliability from, for example, improved filament chemistry. In the early 1940's, Thomas Watson, the Chairman of IBM, predicted a world market for five computers. Even Bill Gates predicted in the early 1980's that 640k ``ought to be enough for anybody''. These predictions are all linear.

Predicting the future of CI and AI is similarly dangerous. My guess is that the next advances in computational intelligence will come from the imagination of those not rutted in current trains of thought. Attempts are currently being made to artificially evolve a mammoth artificial neural network. Some claim artificial consciousness - whatever that is - is possible. I had a graduate student propose using an actual culture of bacteria, controlled by external stimuli, to perform computation. The proposal was either very dumb or incredibly brilliant.

What is the future of computational intelligence? ``The best way to predict the future is to create it.''

