
Updates
Updates to this site are added at the bottom. The last update was made on June 8, 2016 and relates to the question of whether we live in a computer simulation.
Fortitudinous scientists and inventors with uncanny conceptual faculties
Lemmata
The correct Greek plural of lemma is lemmata.
In mathematics, a lemma is a proven proposition which is used as a stepping stone to a larger result rather than as an independent statement in and of itself. A good stepping stone leads to many others, so some of the most powerful results in mathematics are known as lemmata: Zorn's lemma, Bézout's lemma, Gauss's lemma, Fatou's lemma, Nakayama's lemma, etc. (see: Wikipedia).
Fortitude
Strength of mind that allows one to endure pain or adversity with courage. (see: http://www.thefreedictionary.com/Fortitudinous)
Lemma scientists, engineers and inventors
Sometimes one can take an unproven lemma and start building a theory upon it, even though its correctness is questioned by others.
Dissent from existing theory or practice, especially by scientists or inventors who are not part of the "establishment" is a known aspect of the history of science. Often, these researchers are difficult people, sometimes identified as "crackpots". They often challenge the fundamental beliefs of current science. Sometimes their transition into acknowledged science is relatively smooth. Others may have much more trouble.
With no significant source of income, often ignored or ridiculed by contemporaries, they manage to lead the way into new directions. Some of them receive recognition posthumously; even fewer receive recognition during their lifetime. Many of them are forgotten.
In science, technology and engineering almost nothing is easy. Every solution appears to bring its own problems. The safe way is to remain on the proven path. The realization that the proven path may be wrong and that a new approach may be required is abhorrent to many if not most people. It usually means dissent, disagreement and struggle. Despite the popular belief that scientists embrace change and new theories, the opposite is arguably the case. Exceptional efforts are made to explain new phenomena or to address an apparent paradox with the tools and means of an existing theory, rather than to apply a new theory.
Developers of new theories are usually first vetted to assess their status in the establishment. Different kinds of pressure will be brought to bear on the daring scientist to either recant the new theory or at least express severe doubts about its validity. Theorists from outside the establishment may be ignored completely or labeled as crackpots.
This attitude of skepticism is not unreasonable. There are not that many truly novel and valid theories; many theories are truly crackpot ideas, or their supporting experiments are flawed. Experimental errors or uncertainties have demonstrably contributed to the creation of invalid theories. The scientific community has been taken in by new unproven theories often enough to instill at least a sense of skepticism. Cold fusion comes to mind, though there the skeptics were able to quickly disprove the occurrence of the phenomenon.
To take a new path against disapproval of authority or common belief is almost an impossible challenge. It takes an unbelievable conviction, stubbornness, intuition and intelligence. And yes, fortitude.
Two influential "lemma scientists" who made a tremendous impact on science were Joseph Fourier and Oliver Heaviside. Both are connected to explaining the transmission of signals. Both developed insights into fundamental aspects of mathematics that were initially doubted and whose theoretical correctness was eventually proven. What is striking is how convinced both men were of the correctness of their assumptions and the tenacity with which they calculated their way to a solution. Both men wrote books with page after page of equations. A lesser person would most likely have given up halfway, convinced that all of this would lead nowhere. A nice overview of Fourier analysis can be found in The Mathematical Experience by Philip Davis and Reuben Hersh.
Chester Carlson
Chester Carlson was the inventor of the Xerox process. His name will be unknown to many people. His story is truly one of "rags to riches." His idea of how one can make a copy of an image by using electrostatic effects is one of absolute genius. The astonishing aspect is that there is really no "hindsight" effect whereby someone could say, "yeah, that makes sense, I could have done that." If Carlson had not made and pushed his invention, it is not unreasonable to assume that a copying machine would not have been invented for another 15 to 25 years. This invention was not a "race to enablement" such as happened in the case of the transistor. It is a pure, original, breakthrough invention based on an uncanny insight and a show of great fortitude to get it commercialized. In this age of super egos, undeserving persons are often held up as "role models," the only reason being money or success. Here is a true role model: bright, courageous, persistent, hard-working, able to learn from mistakes, generous and modest.
Joseph Fourier
Joseph Fourier was part of an astonishing period in French history, not only politically but also academically. France during that time could be considered the "Silicon Valley" of mathematics and the sciences. Fourier's work "Théorie analytique de la chaleur" is still very readable, though it contains page after page of calculations, demonstrating how he trusted his lemma. Fourier's work finds wide application in electrical circuit theory. Fourier had difficulties getting his "Memoir" on heat accepted in 1807. Lagrange and Laplace opposed his approach (or its lack of theoretical rigor) of expanding functions into series. Thomson (Lord Kelvin) used Fourier's approach in capturing the "diffusion" of electricity into a cable. A copy of "The Analytical Theory of Heat" as published in 1878 can be viewed or downloaded from the excellent site www.archive.org. Fourier's book can be found at this link.
Oliver Heaviside
Oliver Heaviside is an almost forgotten "self-taught" scientist. Almost forgotten, we should add, as many of us will still recognize the name from the Heaviside step function, much applied by Heaviside himself to investigate transmission effects.
Heaviside should have been honored with a Nobel Prize. There is no part of the electromagnetic and electrical sciences that he did not influence or help develop. The modeling of the behavior of signals in electrical network analysis as we currently apply it is from his hand. His "lemmata" approach, especially in his operational calculus, was often criticized, while he fueled the flames of criticism by maintaining that mathematics is an experimental science. However, no one was able to come up with a better way to describe transients in electrical transmission, until it was finally accepted that Laplace transforms could do the trick. A publication on the Laplace transform, just before WWII, changed the acknowledgment of Heaviside's contributions in standard electrical network analysis texts. His name virtually disappeared overnight from textbooks published post-WWII.
An instructive website comparing Heaviside's Operational Calculus with Laplace transforms can be found at "Heaviside, Laplace and the Inversion Integral". A more detailed explanation is provided in "Heaviside Operational Rules Applicable to Electromagnetic Problems" by I.V. Lindell. It provides a further explanation of Heaviside's favored series expansions.
Heaviside had an acid pen and a sharp tongue, and his polemics are still very funny to read. If you think your professor was difficult: this was one scientist who did not suffer fools gladly. (see: the eminent scienticulist Preece). The same excellent site www.archive.org also has some major Heaviside works. Please visit this site for Heaviside's Electrical Papers Part 2. This work contains some of Heaviside's comments on Preece's technical skills.
Not unlike Fourier, Heaviside starts with some assumptions and calculates his way to a solution. He began his scientific career working on the theory of his uncle, Wheatstone. With no formal training in theory, he ended up pretty much creating the foundation of electrical theory and articulating electromagnetic theory. Ido Yavetz, in his excellent but difficult-to-find book "From Obscurity to Enigma," makes the case that Heaviside, after going through extensive calculations, always went back to a fundamentally physical interpretation of the results. Yavetz details Heaviside's belief in the existence of a transmission medium, perhaps the aether, for propagating a field. In several chapters Yavetz points at Heaviside's "inability to resist a caustic remark." This is a great book, now available in a Kindle edition. Recently (2011), a cheaper paperback edition of this excellent book was published in the Modern Birkhäuser Classics series.
Heaviside, a man of incredible brilliance and courage, a superstar scientist and unjustly forgotten.
Paul Nahin wrote the outstanding "Oliver Heaviside: Sage in Solitude," which I can recommend to anyone who likes reading biographies, but even more so to people who are interested in the history of the sciences. Pupin received a patent for inventing the "loading coil," which enabled long-distance transmission of signals of limited bandwidth without using amplifiers, by flattening and lowering the attenuation of a transmission line over that limited bandwidth. The actual invention of the concept is by Heaviside, picked up by several researchers such as John Stone Stone. The reduction to practice is by George Campbell (the inventor of the wave filter). The patent and the money went to Pupin. Pupin was a highly productive inventor. He was also pretty good at self-promotion and earned a Pulitzer Prize for his autobiography. In this book he claims that his insights explain why radio communication between planets would be impossible.
Because of the controversy it is easy to assume that Pupin was a fraud. That he certainly was not. However, he was wrong on some occasions. For instance, Pupin obtained Patent 519,346, entitled "Apparatus for Telegraphic or Telephonic Transmission," in 1894, wherein he claims to improve the impedance by adding capacity to cable sections. However, he later got it largely right and obtained the relevant Patent 652,230 in 1900, over Campbell. He was a bright scientist and a gifted inventor. His patents can be found on Google's Patent site. This site is worth a visit as it allows searching of pre-1976 US patents.
An analysis of the 'loading coil' affair by James Brittain can be found in the book "The Engineer in America" under "Introduction of the Loading Coil". An analysis of what happened inside AT&T in pursuing the 'loading coil' patent is described in Wasserman's "From Invention to Innovation". Norbert Wiener was upset by the treatment of Heaviside and wrote the novel "The Tempter". A highly recommended but difficult-to-find book is "From Obscurity to Enigma: The Work of Oliver Heaviside, 1872-1889" by Ido Yavetz. A very good book that puts Heaviside in the context of articulating Maxwell's laws is "The Maxwellians" by Bruce Hunt.
Preece receives a much more deferential treatment in Russell Burns' book "Communications: An International History of the Formative Years". Preece, at that time the Engineer-in-Chief of the GPO and of course a very influential civil servant, is more in support of Marconi than of Oliver Lodge. An understatement on page 296 is "Indeed, the suggestion has been made that Preece was being vindictive...." Oh, really? Reading Burns' book one realizes that Preece was the contemporary of Hertz, Lodge, Fitzgerald, Heaviside and Marconi. As a "practical man" he is consistently on the wrong side of scientific arguments. Still, he achieves a fairly exalted and influential position related to a science of which, it seems, he learns very little.
Oliver Heaviside is the named inventor on at least one British patent (No. 1,407), in which he establishes himself as the inventor of the coaxial cable to limit inductive coupling between adjacent cables. A description of this patent can be found in Nahin's book, a section of which is available here. UK Patent 1,407, including the provisional specification, can be downloaded by clicking here. This copy of the Heaviside patent was found on the website of the German Patent Office and can be downloaded here.
The patent addresses the issue of inductive coupling. The coaxial cable is actually one of several solutions that Heaviside provides to this problem in his specification. His other solution is a cable with two pairs of circuits, thus forming a four-conductor core cable. The filing date of the patent is April 6, 1880. UK patents at that time did not require claims. Though the patent includes the declaration "...but what I claim is," it does not have claims. Lacking enforceable claims and means to pursue infringers in court, it must have been an almost insurmountable task for Heaviside to pursue infringers of this patent. Based on this experience, Heaviside probably decided that patents were not for him.
The Internet and Prime Resources: Vectors and Quaternions, Complex Functions
One of the great benefits of the Internet is that access is now being provided to original documents that were crucial in technological developments and that were previously hard to get to for people with no access to university or institutional libraries.
The foundations of electronics, network theory, electromagnetism, and wired and wireless communications, including the supporting mathematical theories, were established in a short period of less than 100 years, from the early 1800s to the early 1900s. The theoretical development accelerated tremendously from about 1870 to 1905.
This period is fairly unique in the sense of a broad but still limited number of important researchers and their produced papers, books and patents. Present-day science is highly specialized, and it is almost impossible to get a comprehensive and understandable overview of a field of research: there are too many papers and articles, and the mathematics is too detailed and too complex.
The period of 1870-1905 in the electrical sciences is special in that it developed advanced new tools almost from scratch (such as vector analysis, complex function theory, differential equations) in parallel with insights into the physical aspects of electrical and electromagnetic phenomena.
One interesting struggle was between proponents of Vectors (Heaviside/Gibbs) and Quaternions (Hamilton/Tait). An outstanding book on the issue is Crowe's History of Vector Analysis which is available for review on THIS WEBSITE.
An interesting online book is A Historical Study of Vector Analysis by C.T. Tai of the University of Michigan. The book (and other studies by Dr. Tai) is focused on the del or nabla operator as used in vector analysis. One of Tai's conclusions appears to be that Heaviside's contributions to vector analysis are secondary to Gibbs', as Heaviside was more of a user of vector analysis, rather than a mathematician developing a theory for its own sake. This is a bit strange in view of both Heaviside and Gibbs developing vector analysis independently. Their common background was Maxwell's use of quaternions, and both found quaternions wanting.
It was again Heaviside who introduced complex mathematics into electrical theory. One of the more baffling aspects is that Heaviside published his highly mathematical articles in "The Electrician", a journal for the practicing electrical engineer.
A critical development was the introduction of complex functions, as used by Heaviside, into electrical engineering studies. Kennelly and Steinmetz were two scientists and educators who formalized the use of vector notation and complex functions in electrical engineering education. Kennelly was an assistant of Edison; one of his jobs was to investigate electrocution. Steinmetz, who joined General Electric, was a socialist activist who had to flee Germany. Their groundbreaking and very readable books can be found on www.archive.org.
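The complex-function method that Kennelly and Steinmetz taught reduces AC steady-state circuit analysis to complex arithmetic: impedances add like resistances, and a phasor division yields both the magnitude and phase of the current. A minimal sketch, with component values invented for the example:

```python
import cmath
import math

# Steinmetz's phasor method: represent the sinusoidal steady state with
# complex numbers. Hypothetical series RLC circuit driven at 50 Hz.
R = 100.0      # resistance, ohms
L = 0.5        # inductance, henries
C = 10e-6      # capacitance, farads
f = 50.0       # source frequency, Hz
w = 2 * math.pi * f

# Series impedance: resistor + inductor + capacitor.
Z = R + 1j * w * L + 1 / (1j * w * C)

V = 230.0      # RMS source voltage, taken as phase reference (angle 0)
I = V / Z      # complex phasor current

print(abs(I), "A at", math.degrees(cmath.phase(I)), "degrees")
```

The same calculation done with trigonometric identities in the time domain takes pages; this is exactly the labor-saving step Steinmetz formalized.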
What is electricity?
Electricity (let alone electromagnetic waves) is mostly considered a carrier: either of information or of energy. Not many people still wonder what electricity actually is and how it moves. Edmund Whittaker, in his famous book "A History of the Theories of Aether and Electricity: From the Age of Descartes to the Close of the Nineteenth Century" (1910), which is available on www.archive.org, spends a couple of chapters on early experimentation around electricity and electrical effects. Early on it was not even clear that static electricity and currents, or dynamic electricity, were actually based on the same phenomenon. Only after being baffled by cathode ray effects did J. J. Thomson finally establish in 1897 that electricity is basically an effect created by particles, or electrons. However, the theoretical existence of electrons had already been postulated around 1895 by Larmor, who got stuck in aether theories.
The amazing aspect is not that it took so long to discover the electron. The stunning aspect is that all major theorems that describe voltage, current, the electric, magnetic and electromagnetic field, as well as the formulation and detection of electromagnetic waves in Hertzian sense (1887), were established well before 1897.
The electromagnetic induction effect was discovered in 1831 by Faraday. One of its strange properties is that the forces are perpendicular to each other, which of course is expressed in one of Maxwell's laws as a vector field described by a curl. Such forces were known and formulated in fluid dynamics, for instance by Helmholtz in 1847. By analogy, it was believed that electromagnetic induction could be explained by vortices in an aether. According to Whittaker, Maxwell's model of the electromagnetic field resembled that (of a mechanical model) proposed by Bernoulli in 1736.
The strange thing herein is that the field and wave equations of the electromagnetic field turned out to be correct, and were articulated even before Hertz did his experiment (1887). It is even stranger, from an epistemological point of view, that no aether is required and that a luminiferous aether probably does not exist.
The mechanical view of the electromagnetic field is well researched and documented. For instance, the outstanding book "Innovation in Maxwell's Electromagnetic Theory" by Daniel M. Siegel analyzes Maxwell's mechanical models, as well as Maxwell's approach and its difference from the "action-at-a-distance" approach of Ampère and Weber based on a Newtonian model. This aspect is also extensively described in Olivier Darrigol's "Electrodynamics from Ampère to Einstein" and in Whittaker's book. Hunt's "The Maxwellians" spends a chapter on the mechanical models of the aether, such as the cogwheel aether of Lodge. The main drive behind the aether theories was to explain electromagnetic phenomena in mechanical or fluid-mechanical terms. Hunt also analyzes Larmor's rotational fluid aether and how Larmor made his and other aether vortex theories, required for conductivity and displacement, obsolete by "inventing" the "monads" or free electrons.
An important aspect of Maxwell's laws related to the concept of aether is the existence of the displacement current as depending on a changing displacement field D. The displacement for Maxwell was a literal displacement due to stress in the aether. It is now well accepted in physics that such a current does not exist as a real current, and that it is a quantity defined as being proportional to the time derivative of the electric field (see for instance this Wikipedia site). The displacement in a medium is the polarization of that medium.
Displacement of vacuum has no polarization equivalent, unless one introduces the aether.
The explanation of D in relation to E may take different forms, and it is worthwhile to do some research on the different explanations. In general D is associated with the polarization of the vacuum, e.g. the aether. An article by Petr Šlechta provides a nice explanation of D. Most textbooks on electromagnetic theory still apply the concept of a displacement current. In that context it is interesting to read the preface of Richard Becker's textbook "Theorie der Elektrizität", published in 1933, especially the remark that "Demgegenüber hat die heutige Physik die mit der mechanischen Aethertheorie eng verbundene prinzipielle Unterscheidung zwischen D und E fallen gelassen." ("By contrast, present-day physics has dropped the fundamental distinction between D and E that was closely tied to the mechanical aether theory.") He goes on to call the relationship between D and E an arithmetical trick ("Kunstgriff") by "Elektrotechniker" to provide a comfortable fit of formulas related to permeability and the dielectric constant.
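In standard modern notation (the textbook reading, not Maxwell's aether reading), the relation between D, E and the polarization P is:

```latex
\mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P},
\qquad
\mathbf{J}_{\text{disp}} = \frac{\partial \mathbf{D}}{\partial t}
```

In a material medium P is a real polarization of matter; in vacuum P = 0, so D is simply ε₀E and the "displacement current" reduces to ε₀∂E/∂t, a defined quantity rather than a motion of anything.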
Another example of disdain for D is quoted by Yavetz in his book on Heaviside, on page 165: "In the approach we have taken to electric fields in matter the introduction of D is an artifice which is not, on the whole, very helpful. We have mentioned D because it is hallowed by tradition, beginning with Maxwell, and the student is sure to encounter it in other books, many of which treat it with more respect than it deserves." (from Edward M. Purcell, Electricity and Magnetism).
Recognition of Heaviside's eminent role in articulating Maxwell's laws faded fast over time. Föppl, in his "Einführung in die Maxwell'sche Theorie der Elektrizität" of 1894, states: "Ich halte Heaviside für den hervorragendsten Nachfolger Maxwells..." ("I consider Heaviside the most outstanding successor of Maxwell..."). Becker in 1933 is largely silent about him.
The above does not imply that we should pity Heaviside during his active period. First of all, Heaviside was not a man to be pitied. He was quite opinionated and very well able to defend himself. Secondly, at a time in history when, certainly in Britain, social class was extremely important, Heaviside, without any formal education, positioned himself as a leading and much respected scientist who was recognized, corresponded with and consulted on important and critical issues, and was at least the equal of the other scientific giants of that period. Heaviside's is a story that would fit very well in an American rags-to-riches novel, except that it took place in one of the unlikeliest places. Britain, despite what we are sometimes led to believe, actually has a history of offering leading scientific positions based on merit rather than class; Faraday is certainly an example of that. However, it is sad that Heaviside was not able to convert his scientific skills into at least some level of wealth or comfort, as was achieved by people like Thomson (Kelvin) and Pupin. This, I believe, made his later period uncomfortable, certainly much less comfortable than he deserved, and it affected his productivity and his engagement with scientific issues.
In a strange writeup the IEEE contends that "As on (sic) old man, Heaviside spent his final years comfortably, although his mental powers diminished. "I have become as stupid as an owl," he once bluntly stated. Heaviside died at the age of 74." IEEE editors should read the IEEE published "Sage in Solitude" and remove that paragraph from their website.
Continuation on the Displacement Current
So, if there is no displacement or displacement current, why is it still used in e.m. field theory? Good question!
I believe one probably has to reverse question and answer. The answer is that Maxwell's equations are correct. In particular, the well-known derivation of the wave equation in free space by applying the differential form of the Ampère-Maxwell equation (by evaluating the curl on both sides of the equation and simplifying appropriately) requires the use of the "displacement" term. If that displacement term were not included, no wave equation would result, and hence no e.m. wave would be possible. In general the displacement term is introduced with the example of a charging capacitor, wherein the displacement term is caused by a polarization effect.
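For reference, the free-space derivation (J = 0, ρ = 0) runs as follows. Starting from Faraday's law and the Ampère-Maxwell equation with only the displacement term on the right,

```latex
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},
\qquad
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t},
```

taking the curl of the first equation and substituting the second gives, with the identity ∇×(∇×E) = ∇(∇·E) − ∇²E and ∇·E = 0,

```latex
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2},
\qquad
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} .
```

Drop the μ₀ε₀∂E/∂t "displacement" term and the right-hand side vanishes: no wave equation, and no electromagnetic wave.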
In the steady state the magnetic field depends on the current. One may express this as: the curl of the magnetic field depends on the current, curl(B) = g(I). One may then take the divergence on both sides of this equation. Because the divergence of a curl is always zero, div{curl(B)} = 0, and it follows that div(g(I)) = 0. That would mean that the divergence of the current is always zero, i.e. that the net current flux through any closed surface is zero. In cases such as a charging capacitor, the flux through a closed surface is not zero, and accordingly some correction term is required.
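In modern notation this argument (the standard textbook reconstruction, not Maxwell's own route) is: without the correction term, Ampère's law implies

```latex
\nabla \times \mathbf{B} = \mu_0 \mathbf{J}
\;\Rightarrow\;
0 = \nabla \cdot (\nabla \times \mathbf{B}) = \mu_0 \, \nabla \cdot \mathbf{J},
```

which contradicts the continuity equation ∇·J = −∂ρ/∂t wherever charge accumulates, as on a capacitor plate. Adding the "displacement" term restores consistency:

```latex
\nabla \times \mathbf{B} = \mu_0 \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right),
\qquad
\nabla \cdot \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right)
= -\frac{\partial \rho}{\partial t} + \frac{\partial \rho}{\partial t} = 0,
```

where the last step uses Gauss's law ∇·E = ρ/ε₀. The corrected current is divergence-free, as taking the divergence of a curl demands.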
Because the Maxwell equations are correct and the wave equations are correct, it must be assumed that the added displacement term is correct. From a phenomenological perspective the Maxwell equations correctly describe the physical occurrences. Maxwell himself created a vortex model of reality applied to an all-pervasive aether. Unfortunately, we cannot prove the existence of such an aether based on the physical properties it is supposed to have. So, in a way, we are stuck with equations that correctly describe physical phenomena for which we have no further visualization.
An alternate way to determine Maxwell Equations
So, what is clear is that the displacement current may be assumed not to exist. This point has been made over and over again by leading scientists. However, the point also appears to be mainly semantic, as they still use the term based on the change of the field over time as a contributing factor in the Ampère-Maxwell equation. Even though we don't call that term a displacement current anymore, it is still there, and it is still critical to deriving the wave equation for the electromagnetic field.
Almost without exception, authors point out that Maxwell's introduction of the displacement current into Ampère's law was an absolutely brilliant move. Even though Maxwell assumed an aether in empty space, and such an aether is now believed not to exist, he predicted the possibility of an electromagnetic wave in free space (which is not a conductor) and made the connection between e.m. waves and light.
The unsatisfactory issue is that a valid theory was created from an incorrect physical model (assuming that an aether does not exist). This raises the question of whether one can derive the Ampère-Maxwell equation in a different way that will yield the "displacement current" term, without having to make Maxwell's assumptions of vortices in the aether and displacement of the aether.
It turns out one can. It is meant to provide an "alternate route" to Ampère-Maxwell. But remember, it is done in hindsight, after Maxwell did the heavy lifting. Such a derivation was provided by Robert S. Elliott in his outstanding book "Electromagnetics: History, Theory, and Applications". Professor Elliott uses Coulomb's law and the Lorentz transformations to arrive at the Ampère-Maxwell equation. The book is a delight to read, not least because of its historical notes to each chapter.
A similar derivation of the Ampère-Maxwell equation can be found online here, by Richard E. Haskell.
The Speed of Electricity
The above was initiated by a question about the speed of electricity, which was not answered above. It is generally known that the propagation speed of electricity is on the order of the speed of light (about 2/3 of that in a copper wire). So, what is the actual speed of the charged particles (electrons) in a wire? The answer may surprise you: it is on the order of 10 cm per hour for a DC current. Want to know why? Take a look at this website.
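The order of magnitude is easy to check from the drift-velocity relation v = I/(nqA). The current and wire diameter below are illustrative assumptions, not taken from the text; different choices shift the answer by a factor of a few but keep it in the centimeters-per-hour range:

```python
import math

# Rough drift-velocity estimate for electrons carrying DC in a copper wire.
I = 1.0                      # current, amperes (assumed)
d = 1e-3                     # wire diameter, metres (assumed: 1 mm)
A = math.pi * (d / 2) ** 2   # cross-sectional area, m^2
n = 8.5e28                   # free electrons per m^3 in copper (approx.)
q = 1.602e-19                # elementary charge, coulombs

v = I / (n * q * A)          # drift velocity, m/s
print(v * 100 * 3600, "cm per hour")
```

The signal travels near light speed because the field propagates along the wire; the electrons themselves barely crawl.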
A Text Book on Maxwell's Equations
I noticed that some people are directed to this website on a query related to an introduction to Maxwell's equations. Unfortunately, explaining Maxwell's equations is not the purpose of this website.
I can refer people to several outstanding books on Maxwell's equations and on what is generally known as the EM field.
1. Fundamentals of ELECTRIC WAVES by Hugh Hildreth Skilling (1948)
I like the flow of the development of the theory and the examples. This book is part of my collection. I am not sure anymore why I bought it, but I picked it from the shelf as a refresher. I particularly liked Chapter 8: Maxwell's Hypothesis. The book also explains very well both the 'differential' and 'integral' forms of the Maxwell equations and what the significance of these forms is.
2. A Student's Guide to Maxwell's Equations by Daniel Fleisch (2008)
A chapter (actually more) per Maxwell equation. What I liked in particular is the effort to explain when to use the integral form and when the differential form of the equations. Another great benefit is Professor Fleisch's approach to explaining the meaning of divergence and curl, beyond their common "vector field" definitions. The book is supported by a very helpful website here, which has podcasts on each chapter.
As a non-regular user of EM theory, I am actually taken aback by how much effort it takes me to refamiliarize myself with the matter. It tells me that someone who faces the equations for the first time should probably take sufficient time to study EM field theory. For most people, like me, there will be no shortcut. Take your time.
The Model becomes the Reality
One of the philosophically strangest developments in electrical engineering is the creation of Digital Signal Processing. A device such as a filter, which originally was made from, for instance, resistors, capacitors and/or inductors, can be modeled with mathematical tools such as a complex transfer function in the frequency domain or a convolutional model in the time domain. It was understood that the mathematical model was a description and an approximation of reality. Nowadays a model of a filter is implemented in software and executed. With the help of A/D and D/A converters the model has now largely displaced the actual circuit of a filter. The two (or actually three) developments that make manipulating time-discrete systems possible are:
 the sampling theorem;
 the Discrete Fourier Transform (DFT); and
 the z-transform.
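As a minimal sketch of "the model becomes the reality": a first-order low-pass filter, once built from a resistor and a capacitor, realized as the difference equation y[n] = a·x[n] + (1 − a)·y[n−1], i.e. the transfer function H(z) = a / (1 − (1 − a)z⁻¹). The coefficient value is invented for illustration:

```python
# The filter is no longer a circuit: it is its own mathematical model, executed.
def lowpass(samples, a=0.1):
    """First-order IIR low-pass: y[n] = a*x[n] + (1 - a)*y[n-1]."""
    y, out = 0.0, []
    for x in samples:
        y = a * x + (1 - a) * y
        out.append(y)
    return out

# A unit step settles toward 1, just as an RC circuit's step response does.
step = [1.0] * 50
print(lowpass(step)[-1])
```

Feed this the output of an A/D converter and send the result to a D/A converter, and the software model has replaced the physical filter entirely.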
The sampling theorem, not unlike the DFT, appears to have been reinvented several times. Most familiar as inventors of the sampling theorem are Nyquist (1928) and Shannon (1949). One is referred to an excellent Wikipedia article on the subject, wherein it is suggested that we could call it the "Whittaker-Kotel'nikov-Raabe-Shannon-Someya sampling theorem".
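The content of the theorem, whoever named it, can be seen in a few lines: sampled below the Nyquist rate, two different sine waves produce identical samples. The frequencies here are chosen for illustration; with fs = 10 Hz a 7 Hz tone aliases to 10 − 7 = 3 Hz:

```python
import math

# Sample a 7 Hz sine at 10 Hz, i.e. below its Nyquist rate of 14 Hz.
fs = 10.0
N = 20

f_high  = [math.sin(2 * math.pi *  7 * k / fs) for k in range(N)]
f_alias = [math.sin(2 * math.pi * -3 * k / fs) for k in range(N)]  # 7 Hz aliases to -3 Hz

# The two sample sequences are indistinguishable (equal to rounding error).
print(max(abs(a - b) for a, b in zip(f_high, f_alias)))
```

After sampling, no processing can tell the two tones apart, which is exactly why the theorem demands sampling above twice the highest signal frequency.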
The DFT appears to also have been discovered and rediscovered several times. A very good book on the subject is "The DFT: An Owner's Manual..." by William Briggs and Van Emden Henson, of which sections are available online. The book has an interesting historical introduction (available for viewing at the earlier mentioned website), wherein it is pointed out that Gauss already applied a DFT, if not an FFT. The historical introduction also shows a page from Lagrange on a vibrating-string problem written in 1759, demonstrating a DFT. This is interesting in the context of Lagrange's critical attitude toward Fourier's "memoir" presented to the French Academy in 1807. Others mentioned as having "anticipated" the DFT are Euler, D'Alembert, and Bernoulli. Clairaut is mentioned as probably one of the earliest discoverers of the DFT, in 1754.
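The DFT that all these authors kept rediscovering is, in its direct form, only a few lines. This is the textbook definition X[k] = Σₙ x[n]·e^(−2πikn/N), not an FFT; the test signal is chosen so the result is easy to verify by hand:

```python
import cmath
import math

def dft(x):
    """Direct O(N^2) DFT from the definition."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A single-cycle cosine of length 8 puts all its energy in bins 1 and N-1.
x = [math.cos(2 * math.pi * n / 8) for n in range(8)]
X = dft(x)
print([round(abs(v), 6) for v in X])
```

Gauss's and Lagrange's hand computations amount to evaluating exactly these sums; the FFT merely reorganizes them.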
Ohm's Law
Everything has to start somewhere. Modern network analysis arguably starts with Kirchhoff. Kirchhoff was inspired by Georg Simon Ohm, the discoverer of Ohm's law. How does one discover a law like Ohm's if there are no voltmeters, no reliable or standard voltage or current sources, and no standard resistors? It was believed that if there was something like a resistance, which was debated, then the dependency between current and voltage would be described by a logarithmic relationship.
The story of Ohm is actually a fairly dramatic one. And considering the importance and the brilliance of the discovery it is a fairly unknown story. Joseph Keithley in his Electrical and Magnetic Measurements book provides an outstanding essay on Ohm.
One may find more information on Ohm and Ohm's law on this Wikipedia website. The part that caught my eye was the statement that Ohm "used a galvanometer to measure current..." The Ørsted effect, showing that the deflection of a compass needle depends on a current, was discovered in 1820. In the same year Johann Schweigger built the first galvanometer, also called a multiplier or multiplicator. See this website.
Excuse me.....when did this happen? And how?
Most of us know when something was invented. If not the exact year, then at least a reasonable time frame, let's say within 20 or 30 years give or take. Ohm's law for instance is from 1826. His reliable and actually quite consistent and repeatable power sources were thermoelectric elements. In 1826.
What about the use of the first industrial steam engine? That was in 1712, and not sometime in the early or mid 19th century as many people believe. It was invented by Newcomen. The first steam engine was an absolutely brilliant piece of engineering and a demonstration of an almost unbelievable grasp of scientific concepts reduced to practice. Contrary to what most people will tell you when asked how the first steam engine worked, the engine worked under atmospheric pressure. No technology existed at the time to create sufficiently high-pressure steam. A model of a Newcomen steam engine is shown at this website. The model is available as a kit. The website also shows a video of the working model. The striking feature is the asymmetrical operation of the engine. A nice description of the Newcomen engine can be found here.
The first transatlantic telegraph cable? That one was completed in 1858. The first transatlantic telephone cable was not realized until 1956, almost 100 years later. The technology for voice transmission was much more of a challenge and radio transmission worked quite well and was cheaper.
Formal switching expressions
While Boole is famous for Boolean algebra, he was actually best known in his time for his methods of solving differential equations. Boolean algebra as applied to switching expressions is an invention by Claude Shannon, presented in his master's thesis in 1937. Shannon provides several example circuits, such as a counter, a ripple adder and a factor and prime table generator including a control circuit.
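Shannon's central observation can be sketched in a few lines: switches in series behave as AND, switches in parallel as OR. The snippet below is a schematic illustration of that idea, not code from the thesis:

```python
# Shannon's correspondence between relay circuits and Boolean algebra:
# closed = 1, open = 0. A series connection conducts only if both switches
# conduct (AND); a parallel connection conducts if either does (OR).
def series(a, b):
    return a & b

def parallel(a, b):
    return a | b

# A switch in series with a parallel pair: a AND (b OR c).
def example_circuit(a, b, c):
    return series(a, parallel(b, c))

print(example_circuit(1, 0, 1))  # 1: current flows
print(example_circuit(1, 0, 0))  # 0: the parallel branch blocks
```

From this correspondence, analyzing or simplifying a relay network becomes an exercise in Boolean algebra rather than in tracing wires.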
Fly away! Yes, but how?
One of the most difficult physical concepts is the concept of lift of an airfoil. Furthermore, there appears to be no agreement how lift is actually created. A search in Google on "lift" and "airfoil" will provide a range of explanations by different, apparently very smart people who tend to call each other misinformed (and that is when they are being nice).
Lift can be explained very well in mathematical terms (using vectors for instance). However, it is very difficult to provide easy-to-grasp physical concepts that make one say: aaah, that is why it works. Circulation is one such concept that provides an explanation, but it is difficult to visualize. I believe it is our knowledge of the flight of birds that convinces us that there must be a simple explanation of how flight works. We see birds fly, so flight is possible. Surely, birds are not smarter than people, so even though we cannot fly, we can at least provide an explanation of how birds fly.
Most people accept fairly simple and in hindsight unsupported theories as explanations.
If you are interested in how the theory of flight was developed, I recommend the excellent and very easy to read book "A History of Aerodynamics" by John D. Anderson. One very good website is John Denker's site. He introduces the "flying barn door" to illustrate that camber in a foil is not required for lift.
Lift of an airfoil, such as the wings of birds, is a known natural phenomenon. Many people realized that some form of lift had to play a role in flight. However, it was unclear how lift was generated. There was no clear or apparent physical or mathematical model to describe and quantify lift. Lift is a natural phenomenon, so it is not a proper invention. According to John Anderson in his outstanding book "A History of Aerodynamics," Leonardo da Vinci was one of the first (if not the first) to identify the generation of lift as a separate and important aspect of flight. Da Vinci's explanation was flawed, but he appears to have been the first to suggest fixed foils or wings to generate lift.
The first scientist to provide an accurate model that enables one to calculate the lift generated by an airfoil was Joukowski in 1906, based on the concept of circulation (or circulatory flow) over an airfoil developed by Lanchester as early as 1890.
The circulation will generate a vortex at the end of the wing that will be "left behind" by a moving airplane. A great picture of an airplane generated vortex is shown at this Wikipedia site.
The difference engine
Doron Swade in his excellent book 'The Difference Engine' describes the difficulties of creating a copy of Babbage's Difference Engine no. 2 for the celebration of the Babbage centennial. Many of the problems were money related. A great deal of frustration was created by the tight tolerances on the dimensions of the parts for the machine. A big frustration was the frequently occurring lockup of the machine. Anyone who has ever worked on mechanical calculating machines can probably sympathize with the frustration that the builders must have experienced every time the engine started working, going through a routine, only to lock up close to achieving a result.
Two examples now exist of models of a Difference Engine where an approach has been taken where a more relaxed coupling of parts has been applied. The first example is a small scale model of a Difference Engine created from Meccano by Tim Robinson: see the website http://www.meccano.us/difference_engines/rde_1/ . I urge people to take a look at the video of the operation of the machine. The second example is a model of a Difference Engine created from LEGO parts by Andrew Carol: http://acarol.woz.org/ .
Both examples demonstrate that relatively simple parts can be used to realize this complex machine. Of course, these simple parts were not available to Babbage. Still, it demonstrates how mechanical computing was well within reach in Babbage's time. As with the original Newcomen steam engine, the technology for the parts was not geared to efficient manufacture. However, the basic insights were developed and available, and enough technology was ready to be used in 1712 as well as in 1832.
Complex is more likely than simple and not necessarily engineered
Engineers are trained to create technical solutions that are efficient and perform their task with, for instance, the fewest possible components. That is why certain structures and circuits are recognized as being 'engineered.'
We are familiar with the concept of building complex constructions from simple building blocks. An inherent assumption behind creating complex structures is that availability of a set of certain primitive building blocks is required. When we analyze (or reverse engineer) the complex construction we should find the primitive building blocks. This is such an elementary idea that it is probably for most people beyond trivial.
Virtually all our thinking and analyses of naturally occurring phenomena are based on finding the simplest element, the simplest and smallest particle or expression. When things are complex, we want to reduce them to their simplest representation.
One example of such an engineering approach is the design of digital circuitry. Here one may represent all states of a circuit using primitive elements and eliminate all parts that do not contribute to a required state. Karnaugh maps are used to minimize circuitry.
The opposite approach is to use the maximum number of different digital functions to create a simple digital design.
A fairly complex digital design is one that calculates the sum of two binary numbers. Such an expression has to calculate a residue as well as a carry over several stages if the numbers comprise multiple digits. The smallest addition of two single binary digits involves an XOR and an AND function, which may be considered the primitive building blocks of the ripple adder, as the chained full-adder expression is called. The simplest ripple-adder expression is the full addition of 2 binary digits, which involves determining one residue and one carry digit.
So what if one has run out of XOR and AND functions? The next reasonable step is to create the XOR and AND functions from adequate connectives, such as NANDs. What if there are no NAND functions either? It turns out that at that stage one is not able to create the simplest of simple ripple adders, which is the addition of two bits.
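That intermediate step, rebuilding XOR and AND out of NAND alone, can be sketched as follows (a standard construction, offered here purely as an illustration):

```python
def nand(a, b):
    return 1 - (a & b)

# AND from NAND: invert a NAND with itself.
def and_(a, b):
    t = nand(a, b)
    return nand(t, t)

# XOR from four NANDs, the classic construction.
def xor(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# The simplest adder: one residue (sum) bit and one carry bit.
def half_adder(a, b):
    return xor(a, b), and_(a, b)

print([half_adder(a, b) for a in (0, 1) for b in (0, 1)])
# [(0, 0), (1, 0), (1, 0), (0, 1)]
```

NAND is functionally complete, so this route always exists; the point of the argument in the text is what happens once even that route is taken away.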
It is reasonable to assume that the addition of two words of two bits each with a ripple adder is more complex than the addition of two bits. The reason is that additional layers of interconnected logic have to be provided.
One simple but time-consuming experiment is to create and run a software program with the correct structure of the 2-by-2-bit ripple adder (or any n-by-n ripple adder) and start applying any of the 16 possible two-input binary logic functions (and not the adequate connectives), but not the XOR and AND functions. One has to check all possible results of the addition against the known correct results, which is time consuming. Chaos is to be expected, right?
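The full 2-by-2-bit search is lengthy, but a scaled-down version of the experiment, checking only the single-bit stage, already shows why XOR and AND are forced there. The sketch below assumes each two-input function is represented by its four-entry truth table:

```python
from itertools import product

# A two-input Boolean function as a truth table (f(0,0), f(0,1), f(1,0), f(1,1)).
def apply(table, a, b):
    return table[2 * a + b]

ALL_16 = list(product((0, 1), repeat=4))

# Which (sum, carry) pairs among the 16 functions add two single bits correctly?
matches = [(s, c) for s in ALL_16 for c in ALL_16
           if all(apply(s, a, b) == (a ^ b) and apply(c, a, b) == (a & b)
                  for a in (0, 1) for b in (0, 1))]

print(matches)  # [((0, 1, 1, 0), (0, 0, 0, 1))]: only XOR and AND survive
```

Extending the same brute-force search to the 2-by-2-bit structure, with a function at each position, is what makes the experiment time consuming.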
The surprising fact is that the more complex expressions of multi-digit ripple adders can be, and are, created from functions other than the binary XOR and AND. So while it may not be possible to use certain functions to create the simplest device (the single-digit ripple adder), it is possible to create the more complex expressions.
From a logic perspective: complex is more likely than simple! Furthermore, the more complex solutions are not what we would generally consider to be engineered solutions.
The art of multiplying
There is a continuing discussion about the teaching of arithmetic at school. This discussion focuses on different aspects, which roughly can be set in the following categories:
1. the benefit of memorizing the arithmetical tables of addition and multiplication
2. focusing on applications rather than performing the steps of for instance multiplication
3. improved arithmetical algorithms
The third element does pop up now and then. It mostly focuses on rearranging existing algorithms, not on teaching really different ones.
In the Netherlands in the 1950s, we were trained at school to memorize the tables of multiplication and addition. In third grade a monumental event was the learning of the tables of multiplication from 1 to 10. My teacher was Mrs. Kobes, who made us recite the 10 tables of multiplication individually in front of all the other kids in order to earn our "Table Diploma." My recollection suggests that everyone succeeded in obtaining said diploma. Looking back, I believe this to have been a great opportunity of achievement for everyone in class. No matter what happened, when you had your Table Diploma it proved that you were literate in arithmetic. Mrs. Kobes offered the daring folks the opportunity to obtain an Advanced Diploma for reciting the multiplication tables of 11 to 20. I do not recall many succeeding. I did not try. Mostly, people got stuck at the table of 14 or 17, starting to do quick addition, which usually failed and took too much time, which automatically disqualified you.
People often argue that learning tables does not provide real knowledge and that it should be sufficient to know what multiplication is and what it does and then use a calculator. Strictly speaking that is correct. Almost everyone who has to perform a complex calculation will grab a calculator or use a spreadsheet program.
I believe that a critical element in arithmetic is the carry digit. The use of the carry digit is the enabler of our way of performing additions and multiplications. It is truly algorithmic and has to be learned.
The human brain, I believe, is not well equipped to actively conduct an addition or a multiplication. Both operations are a form of counting if we associate a number with objects. However, in arithmetic numbers are merely symbols, and addition and multiplication are table driven operations, generally with two inputs and one output.
Humans have no circuits that perform these tables: we have to memorize tables and use them in our arithmetic. In that sense calculators are superior in performance: they perform the steps of addition and multiplication every time, based on a switching approach that does not require remembering what the table was. The tables (certainly in binary arithmetic) are inherent to circuits: usually the AND and the XOR circuit.
We are so used to the algorithms of addition and multiplication that we do not realize that they are true algorithms based on memorized truth tables. Most of us also do not realize that there are different algorithmic ways to perform fundamental calculations such as addition and multiplication.
The basic addition is a modulo-10 addition of 2 decimal numbers, with a result smaller than 10. For this type of addition no carry will occur. One may also consider other additions, for instance modulo-100, as fundamental. This means that one names a resulting sum such as 16 (sixteen) as a unique identifier and not as a combination of "radix-n place dependent" digits. Unique designations have limited value, because of the wide range of numbers that one has to work with.
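The carry-driven school algorithm described above can be sketched as follows (for any radix, decimal by default):

```python
def schoolbook_add(x, y, base=10):
    """Digit-by-digit addition with an explicit carry, as taught in school."""
    digits = []
    carry = 0
    while x or y or carry:
        dx, x = x % base, x // base      # peel off the least significant digits
        dy, y = y % base, y // base
        s = dx + dy + carry              # the memorized addition table
        digits.append(s % base)          # residue digit
        carry = s // base                # carry digit, rippling to the next place
    return digits[::-1] or [0]

print(schoolbook_add(999, 111))  # [1, 1, 1, 0], i.e. 1110
```

The same loop works unchanged in base 16, which is exactly why hexadecimal arithmetic feels hard to humans but not to machines: the algorithm is identical, only the memorized tables are missing.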
All of this appears to be trivial beyond words. I suggest one tries to do some basic multiplication in hexadecimal representation and sees how difficult that is without paper and pencil.
My argument here is that nothing in arithmetic is easy or natural and almost everything requires significant memorization and practice. I believe that only an exceptionally small portion of humankind can be given a rule and then apply it with certainty to problems. Most people have issues with "understanding" the rule: how it is applied, when it is applied, what the parameters are that have to be applied, and why it is applied. They need practice. It is of no use to provide a student with an automated tool to determine a definite integral if the student does not know what it is. This applies to practically all concepts in algebra, analysis and calculus, of which the latter applies an abundance of geometrical examples to illustrate the concepts.
It would be ridiculous to tell a 7-year-old that addition is really a form of symbol processing: a combination of truth tables and work flow. Now, go ahead and do additions.
It requires practice and a familiarity with applying the rules that is beyond any doubts or uncertainties. Only then does a person (child) experience and know what a mathematical rule is. A mathematical rule as taught is an absolute rule that is not open to negotiation or trickery, unlike human rules such as "one should not steal."
It is a rule that provides the same results every time. It does not matter where you apply it or when you apply it. It does not matter if you apply it supervised or unsupervised. It does not matter if you apply it at home or at school, today or tomorrow. How can one obtain such certainty? By practice!
For many people, myself included, problems with applying mathematics were over, or were greatly reduced, when it was realized that all it is based on is rules. Get the rules, and apply the rules. (Which, unfortunately, is different from coming up with new rules.)
The basic approach in addition is the ripple adder. It is the human approach as well as the machine approach. It contains two types of elements: the switching function elements (the modulo-n addition and the generation of the carry) and the rule or flow of steps to arrive at the sum.
The machine ripple adder has as a bottleneck the propagation of a carry (the ripple) through partial sums, which delays the machine determination of a sum until the ripple has been completed. An example is 999 + 111. Humans are much better (for the addition of two numbers) at predicting how the ripple is going to propagate a carry. It seems that the human mind operates exceptionally well in pattern recognition.
One may also provide a machine with instructions or an algorithm to shortcut a carry ripple. This is known as Carry Look-Ahead (CLA) or carry prediction. A good description can be found in Wikipedia. There are different schemes to implement CLA adders by applying different flows of instructions. One such design of a machine-based CLA adder is the Brent-Kung adder, first described by Richard Brent and H. T. Kung in the 1982 article "A Regular Layout for Parallel Adders." Other carry look-ahead schemes are the Kogge-Stone scheme and the Sklansky scheme for conditional carry calculation. On top of that, one may postpone calculating a carry by using carry-save addition. Wallace tree multipliers can limit the delays in determining a product. A totally novel manner of multiplication by dramatically limiting the number of partial products can be found here.
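The generate/propagate formulation underlying all these schemes can be sketched as follows. The loop below evaluates the carry recurrence c[i+1] = g[i] OR (p[i] AND c[i]) sequentially, which the CLA circuits compute in parallel; this is an illustrative sketch, not any specific adder from the cited articles:

```python
def gp_add(a, b, n=8):
    """Addition via generate/propagate signals, the basis of carry look-ahead."""
    g = [(a >> i & 1) & (b >> i & 1) for i in range(n)]  # generate: both bits are 1
    p = [(a >> i & 1) ^ (b >> i & 1) for i in range(n)]  # propagate: exactly one bit is 1
    c = [0]
    for i in range(n):
        c.append(g[i] | (p[i] & c[i]))                   # the carry recurrence
    return sum((p[i] ^ c[i]) << i for i in range(n)) + (c[n] << n)

print(gp_add(99, 111))   # 210
print(gp_add(255, 1))    # 256: the worst case, a carry rippling through every place
```

Brent-Kung, Kogge-Stone and Sklansky differ only in how they arrange the hardware that evaluates this recurrence for all positions at once.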
All the above approaches provide valid addition and multiplication algorithms, which are in many ways counterintuitive. The reason why these algorithms work is the use of multi-digit input numbers. Trying to get one's mind around these algorithms and to grasp their meaning or structure is quite a difficult task. I believe it is comparable to a child being instructed to learn to add or to multiply. Just telling it "this is the rule, now apply it" is not sufficient.
It is comparable to the experience of high school students asked to calculate 1/a + 1/b. A not inconsiderable number of students will tell you that it is 1/(a+b). Even grownups will assert that as the result. The consequence of this can be devastating and prevents a student from further developing mathematical skills. It undermines the confidence of a student in his or her capabilities, as this problem will eventually pop up as a relatively minor part of a larger exercise. It keeps the student in limbo, aware of a rule, but not entirely sure why it is so and why, when and how it is applied. It creates a belief that mathematics is arbitrary and that it requires some mystical insight to apply it correctly. As a former math teacher I have spent significant amounts of time in remedial teaching on this seemingly minor subject. It was almost comical to experience the relief of students when they finally grasped the issue and it turned from insurmountable to trivial.
One may check the above arguments by reviewing the 4th grade assessment of the Trends in International Mathematics and Science Study (TIMSS), especially the one about Mathematics Concepts and Mathematics Items. Also, give these exercises a try. Try to go back in your mind to 4th grade and try to answer the questions from that perspective. The questions, if answered correctly, reflect excellent language, mathematical and reasoning skills. If, however, one misses the basic arithmetical skills, the whole building comes tumbling down, so to speak.
Missing the ability to perform the most basic of calculations like multiplications has a similar effect on students. Almost no kid has the confidence to say: I do not have the skills to solve this problem with pencil and paper, but if you provide me with a calculator I will solve it for you. In my experience that is not going to happen. [P.S. I wrote this paragraph in the context of the TIMSS tests. Many kids of course do multiplications by calculator when provided with a multiplicand and a multiplier. The above was meant to focus on the process of determining what the multiplicand and multiplier are in a stated problem. If you have no clue what these concepts are, then no computer or calculator is going to enable you to solve a problem.]
My conclusion, strangely enough, is that math tests may be too hard, or rather: focused too much on non-arithmetical aspects. Try the fourth grade test for yourself and try to remember what your skills were at age 10. If a student achieves a high score on the test, he or she is clearly smart and probably good at math. However, I am not sure that a lower score (not a very low score) reflects bad arithmetical skills (though it may). It probably reflects bad or underdeveloped language and reasoning skills. As argued above, arithmetical skills are probably fairly easy to learn. However, it seems that no one is focused anymore on the basic arithmetical skills. The focus is more on "practical applications." Trying to improve the scores on the present tests may prove to be a losing battle. We are training students without giving them the proper basic tools and confidence. The way to improve scores may be to go back to arithmetic basics and test those basic skills. The education and testing of reasoning skills may actually better take place in language class. Language class is, I believe, a highly underestimated and probably the most efficient part of school for providing students with strong logic and reasoning skills.
Mathematical skills are 50 percent language skills and 50 percent "rules" skills. Arithmetic is close to 100% rules skill.
I recently discovered the existence of a Dutch movement that supports "Opa Rekenen," which means "Granddad Arithmetic." It is a response to the movement of "realistic arithmetic." An excellent article by Professor Jan van de Craats on problems in arithmetic teaching can be found here. The article is in Dutch but has some exercise samples that need no translation. Related slides in English with great examples of "Why Daan and Sanne can't add" can be found here.
The Human Brain has no ALU
Anthropomorphism is the attribution of human traits to animals and machines. We are now so familiar with the structure of a computer or processor that we often apply a reverse anthropomorphism to the human brain. In many cases terms and concepts borrowed from computer science are applied to the human brain: "memory," for example, as in short-term memory (which would be a type of RAM) and long-term memory (a kind of ROM-type memory which is hard to erase).
Despite the different analogies that appear to work, we (at least I) do not have an ALU or Arithmetic Logic Unit. An ALU distinguishes itself from a programmable unit in that it is hard-wired. It is pre-wired to perform operations such as additions and multiplications. No programming is required for an ALU to do its job. One only has to provide the operands at its inputs. In other words, one does not need to program an ALU to do an addition. It already knows how to do that.
We, or animals, do not have any ALU capability, which is a purely mechanical or mindless capability. It is just a series of preset and hardwired instructions that have to be performed on operands. Furthermore, an ALU is task specific, targeted towards arithmetic operations. For instance, a device may implement a digital filter which is heavy on multiplications. In such a unit it is beneficial to have a special ALU do the repetitive work.
This, to me, directly implies that the mechanics of arithmetic have to be taught and learned and memorized, as we are all very much aware. Arithmetic is a mechanical basis for solving problems. You generally cannot learn the mechanics of arithmetic by trying to solve problems. Only very few people have that ability. To expect that capability in children is......!
Knowledge and the structure of the brain
How can we know? And is knowledge empirical, or is some knowledge, such as mathematics, innate to the human mind, or does it even exist outside the human mind?
Knowledge and the acquisition of knowledge is described in the very readable and very well illustrated book Introducing Empiricism by Dave Robinson and Bill Mayblin.
Often "2+2=4" is used (also in this book) as an example of an analytic statement (rather than a synthetic statement), because it "[t]ells us nothing fresh about the world..." (Introducing Empiricism, page 133). "It is just a convenient way of telling you that 1 + 1 + 1 + 1 = 1 + 1 + 1 + 1."
I used to agree with the above, which in a nutshell tells us that all logical or mathematical knowledge is a priori, which idea, it seems, originates with Kant.
However, after working in n-state switching and multi-valued logic I am changing my mind.
First of all the two statements "1+1+1+1 = 1+1+1+1" and "1+1+1+1= 4" are two entirely different statements, though both statements have a common structure, using the equivalence sign. Both statements relate to symbol processing.
Symbol processing appears to be something that most animals, including humans, can do very well. We cannot, of course, know whether animals can think. However, animals appear to be able to categorize sensory data such as images, for instance into categories such as "danger," "food" or "don't care."
The statement "1+1+1+1 = 1+1+1+1" is an input/output statement wherein a first series of symbols (1 + 1 + 1 + 1) is compared with a second series of symbols (1 + 1 + 1 + 1). Some mechanism in our brain makes us decide that the first and the second sequence are identical. It may actually be related to a form of pattern recognition, wherein we perform comparison of signals representing the sequences. It should be clear that we do not compare images, as images do not exist in our brain.
Another example is "1+1+1+1 = 1+1+1-1", for which we can also generate an immediate answer without actually evaluating the expressions.
The expression "1+1+1+1= 4" is of a totally different nature, as this generates a symbol (4) as the result of an addition (+). In fact, one may say that the brain implements a logic table related to '+'. By inputting 4 symbols '1' the symbol '4' is generated. This is a physical phenomenon, not an "a priori" concept. In fact, as an "a priori" concept the expression "1+1+1+1= 4" is meaningless, as both sides of the '=' supposedly are the same or express a tautology. We actually have no real understanding of a priori knowledge, as we have to apply "rules" or "switching tables" to evaluate expressions, so we can only say in hindsight that expressions comply with a rule. So, what we call "a priori knowledge" actually reflects a physical structure of the brain.
Quantum by Manjit Kumar
The life of Albert Einstein and his theories have been extensively covered in the literature for the interested layperson. His interactions with his contemporaries and how he disagreed with existing interpretations of physics are less accessible in the non-professional literature.
The book Quantum by Manjit Kumar changes that. Quantum provides a captivating and very engaging narrative about the development of particle physics in mainly the twentieth century.
It is useful to have some understanding of physics, but nothing beyond high school level is required to follow the unfolding story. Mr. Kumar explains the concepts that are being developed very well, and one does not have to be a rocket scientist to follow the story.
What makes this book so engaging are the descriptions of the scientists who play a role in the story of quantum mechanics, and especially of why they took the steps they did in physical theory. One may know the names of the scientists but, like me, not why they developed specific theories and how they communicated and interacted with their peers or (in many cases) the authorities in their field. Planck, Wien, Pauli, Heisenberg, Dirac, and Bell are among them.
In the end, the story really comes down to the competing interpretations of reality of Einstein and Bohr. The name of Bohr is of course relatively well known; he is the Danish physicist who created a model for the hydrogen atom. I personally did not know much else about Bohr as a person. I am not sure whether this was the intention of Kumar, but Bohr in this book actually comes across as a rather annoying person. Very well aware of his own status and importance, keen on his own comfort, but caring little about the comfort and privacy of his underlings, with an annoying habit of badgering his opponents with constant and unrelenting discussions and opinions. He is what is called in German "rechthaberisch."
Kumar bends over backward to explain concepts and tries to keep the science from being a barrier to following the story. He succeeds amazingly well. There are some strange mistakes in the book. For instance, Mr. Kumar maintains that Maxwell wrote the set of 4 Maxwell equations, which he did not. Those who would like to view the original 20 Maxwell equations (yes, 20) can see them here.
Elsewhere, Mr. Kumar maintains that the square of a complex number is a real number. This is of course not true. For instance, (x + yi)*(x + yi) = x^2 + 2xyi - y^2, which is again a complex number.
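A one-line check with an arbitrary example value confirms it:

```python
# The square of 1 + 2i is -3 + 4i: again a complex number, not a real one.
z = complex(1, 2)
print(z * z)  # (-3+4j)
```

(It is the square of the modulus, z times its conjugate, that is always real, which is presumably what was meant.)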
There are some other parts that were not clear to me. I suspect that it may not be possible to simplify some of the concepts without using formulas, which Mr. Kumar largely avoids, and in some parts the author simply runs into the limitations of using only words. However, the book succeeds very well in providing a cohesive and understandable narrative of the history of quantum mechanics. I especially liked the later parts and the description of the impact of the "uncertainty principle," of which I was not aware. The book reads like a novel, a great novel.
The Need for a Simple Cheap (Free?) Programming Language
As a practicing inventor I usually come up with ideas that require some form of validation, often by a computer program. The simplest form of validation is usually some straightforward calculation. In more complex situations there is generally a requirement to evaluate possible alternatives. In other inventions it would be beneficial to process data from different sources.
I usually work in Matlab, which is a versatile but also expensive programming environment. An alternative is Visual Basic, which offers as benefits a great interactive environment and the ability to create executables. Both programming environments have as a drawback the required knowledge of instructions, the component environment and the procedures for creating a program. Matlab is probably the easiest to learn, though it has a very extensive set of special instructions.
I am actually appalled at how difficult it still is to learn a programming language and environment. While the visual interface of programming languages has greatly improved, I find it still difficult to learn a new programming language. Not only that. Even after I have mastered a language, if I have not used it for a while, let's say over a period of 6 months, it seems that I have to start the learning process all over again.
Every language has its own notation, set of definitions and procedures. And none are actually intuitive.
My earliest experience was with Algol. To my surprise, not very much has improved over the years. There is still no natural language interface that lets me easily create a flow of instructions on different datasets and present results in an easy to understand and visually attractive way.
I used to program in APL, which was (is?) famous for its powerful instruction set. I remember that we sometimes had contests to condense intricate instructions into the fewest possible steps. These programs were almost impossible to analyze.
One current alternative that is easy to learn, compatible with Matlab, and free is FreeMat. You can find information on FreeMat here.
Hiding in Plain Sight
One of the more intriguing developments by Heaviside is the coaxial cable and the patent that he obtained for it. This patent (GB 1407) is mentioned in all the leading biographies and technical books about Oliver Heaviside. My impression was that I would have no problem finding a copy of the patent online. Nothing of the kind. I did numerous extensive web searches, mainly using Google, and came up empty. One obvious source was the UK Patent Office (now the UK Intellectual Property Office) or even the European Patent Office. But there, at least, I was not successful.
Finally, I found a copy of the Heaviside patent here, (of all places) on the website of the German Patent Office. And not through a search of the site, but by going through an obscure folder listing of documents.
The patent is very much worth studying, not only for the coaxial cable, but also for its other solution for eliminating inductive effects. I would say that the provided solution is a typical Heaviside one.
To my surprise, it is still very difficult to find a copy of this patent by conducting a simple Google search. It is definitely there. Hiding in plain sight.
How does a Patent Promote Innovation?
The US Constitution states "The Congress shall have power...To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries."
Many people, in one way or another, argue that a patent actually prevents innovation, because it limits others from using an invention that is protected by a patent. While people in general agree that an inventor should be compensated for his invention, they find that it is in some way unfair that someone can be prevented from using something that is fully described and publicly available.
For instance, how can one be prevented from using Heaviside's method of solving differential equations after it has been published? In fact, one cannot be prevented from doing so. But clearly, while the Heaviside method was widely used for several decades, its mere availability did not provide an incentive to further develop or improve on the method. It took until the 1930s/1940s for a more rigorous and theoretically simpler approach using Laplace transforms to be developed.
The genius of the US Constitution in providing patent protection is not only that it provides the inventor with a way to collect money on his invention. Patent experts will tell you that a large percentage of inventors never collect a penny from their patents. But patents provide an effective barrier that has to be overcome by potential infringers. This plays to human nature never to give up anything voluntarily (in business). It thus stimulates a person or an organization to conduct research that overcomes a patent and/or that creates something so desirable that owners of competing patents want to apply it.
From the perspective of stimulating active research and innovation, we have far too few inventors who apply for a patent. The process is geared towards using existing technology and overcoming patents by legal steps.
I believe that future economic success will be based in large part on the development and ownership of Intellectual Property. Apart from ownership of minerals, it is now clear that a true competitive resource of an economy is its IP and its capability to generate, protect and apply IP.
Adequate Connectives
A functionally complete set of logical connectives or Boolean operators is one which can be used to express all possible truth tables by combining members of the set into a Boolean (binary) expression. See for instance the article in Wikipedia. The singleton sets {NAND} and {NOR} are functionally complete, and NAND and NOR are called the adequate connectives.
The NAND can be formed from {NOT, AND}. The set {NOT, AND} is thus also a set of adequate connectives from which all binary functions can be formed. The NOT is a binary inverter which can be expressed as [0 1] → [1 0]. In words, a state 0 is inverted to state 1 and state 1 is inverted to state 0.
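This composition is easy to check exhaustively. A minimal Python sketch (my own illustration with hypothetical function names, not part of the downloadable programs):

```python
def NOT(a):
    # the standard inverter i1: [0 1] -> [1 0]
    return 1 - a

def AND(a, b):
    return a & b

def NAND(a, b):
    # NAND formed from the set {NOT, AND}
    return NOT(AND(a, b))

# print the full truth table of the composed NAND
for a in (0, 1):
    for b in (0, 1):
        print(a, b, NAND(a, b))
```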
While commonly only [0 1] → [1 0] is mentioned as a binary inverter, there are in fact 4 binary inverters:
i1: [0 1] → [1 0], the standard inverter.
i2: [0 1] → [0 1], the identity.
i3: [0 1] → [0 0], the always 'off' inverter.
i4: [0 1] → [1 1], the always 'on' inverter.
Under a new definition, a set of adequate connectives is {i1, i2, i3, i4, BIN}, or a set consisting of at least one of the 4 binary inverters and one binary two-input/single-output function.
What, then, are the sets of adequate connectives? Well, there are 8 of them: the set of inverters combined with one of 8 qualifying BIN functions. What are those qualifying BIN functions? The qualifying functions BIN have an odd number of 0s (and thus of 1s) in their truth table. See the following diagram.
The above configuration is the universal representation of the binary adequate connective. A single function BIN by itself is unable to generate all 16 functions. A set of Matlab/Freemat programs that evaluate all 16 binary functions BIN can be found here. Download and extract the programs into a single folder and run 'makebinfun' under Matlab or Freemat.
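The claim about the 8 qualifying BIN functions can also be verified by brute force, independently of the downloadable programs. The Python sketch below is my own illustration; it assumes that arbitrary circuits may be built from BIN and the 4 inverters, and it computes, for each of the 16 candidate BIN functions, the set of 2-input functions they generate:

```python
from itertools import product

# the 4 binary inverters i1..i4, each given as (image of 0, image of 1)
INVERTERS = [(1, 0), (0, 1), (0, 0), (1, 1)]

def generated(bin_tt):
    """All 2-input functions built from BIN and the inverters.

    A 2-input function is a 4-tuple of outputs for inputs (0,0),(0,1),(1,0),(1,1);
    bin_tt is the truth table of the candidate BIN function.
    """
    funcs = {(0, 0, 1, 1), (0, 1, 0, 1)}  # seed: the projections a and b
    while True:
        new = set(funcs)
        for p in funcs:                    # an inverter applied to a function
            for inv in INVERTERS:
                new.add(tuple(inv[v] for v in p))
        for p, q in product(funcs, repeat=2):  # two functions fed into BIN
            new.add(tuple(bin_tt[2 * p[i] + q[i]] for i in range(4)))
        if new == funcs:                   # closure reached
            return funcs
        funcs = new

# which of the 16 candidate BIN functions generate all 16 binary functions?
adequate = [tt for tt in product((0, 1), repeat=4) if len(generated(tt)) == 16]
print(len(adequate))                                  # 8 qualifying BIN functions
print(all(tt.count(0) % 2 == 1 for tt in adequate))   # each has an odd number of 0s
```

The even-weight candidates (such as XOR) only ever produce affine functions, which is why they fall short of the full 16.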
The ternary or 3-state universal connective
One can find similar universal adequate connectives in 3-state or any n-state switching machine logic. In the ternary case there are 27 3-state inverters, some of which are listed below:
[0 1 2]→[0 0 0];
[0 1 2]→[0 0 1];
[0 1 2]→[0 2 1];
[0 1 2]→[2 1 0];
The following figure illustrates one possible universal 3-state adequate connective under the new definition.
The above configuration using TER and the 3-state inverters is able to generate all 19,683 3-state or ternary 2-input/single-output functions.
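The counts in this section follow directly from the definitions, and a short calculation confirms them (a Python sketch of my own):

```python
# An n-state inverter maps each of n states to one of n states: n**n of them.
# A 2-input/single-output n-state function assigns one of n outputs to each
# of the n*n possible input pairs: n**(n*n) of them.
def inverter_count(n):
    return n ** n

def bin_function_count(n):
    return n ** (n * n)

print(inverter_count(3), bin_function_count(3))   # 27 and 19683 in the ternary case
print(inverter_count(2), bin_function_count(2))   # 4 and 16 in the binary case
```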
The US Patent System is under Attack.
How is an inventor to respond?
There is an old European saying that you can always find a stick to hit a dog. That is, you can always find a reason to hurt or attack someone (though it is unclear to me why one would want to hit a dog).
A number of people and interest groups in the USA don’t like our patent system, which is under attack. The blog IPWATCHDOG is documenting the ongoing efforts underway to weaken the patent system (see for instance http://www.ipwatchdog.com/2015/06/15/patentreformfuelsfearparalyzesinnovaitonmarket/id=58743/).
Especially “software” and “business methods” patents are a source of irritation to these people and groups. To come back to the above “stick,” it should be noted that some of the software/business method patents and patent claims that have been subject to review in court are actually examples of bad patent or claim drafting, and they have been used to establish new precedent that has weakened and invalidated many patents.
I do not want to discuss whether there was an actual invention in these patents (I believe there almost always is). However, US Patent Law sets very clear rules on the requirements for valid patents. The bad patents and bad patent claims have clearly been used as a stick to attack the US patent system, and software patents in particular, though other types of patents are also affected.
For a considerable part, this is a political issue that has to be resolved in Congress, not in the courts. However, under the current anti-patent atmosphere, inventors and invention owners can take some preventive measures by filing “old school” patent applications that are heavy on structure, technology, and novel and non-obvious results, with a clear eye towards defensible patentability. This means that the evidence for arguments on patentability should be in the specification. While this seems fairly obvious, it is not always the actual practice. Especially rejections of inventions as being “abstract ideas” require a better technological description in the specification and better “technology” claims.
This requires a patent drafting approach that relies heavily on “patent engineering,” which can be achieved by having experienced engineers (patent engineers) involved.
Glen Diehl and I have often discussed this approach, based on many patent cases that we were both involved in. Glen Diehl decided to put his ideas on “patent engineering” into practice and founded PATENT35, which is focused strictly on drafting patent specifications and claims and cutting away the expensive “nonsense.” Take a look at www.patent35.com.
The best way to prepare for an attack on your patent is to have a patent specification of unassailable quality. Remember, you may be able to repair (= amend) a claim, but you most likely will be unable to repair the specification.
Patent Basics
As a Founder or Executive of a Startup Company you may have the need to seek patent protection for important Intellectual Property. You may have to do it in preparation for a meeting with potential investors. The problem may be that you have some idea of what a patent does (it protects an invention), but no firm grasp of what a patent actually is, what the process to obtain a patent is, and what the potential costs are.
You are not alone. I have been there too, with many others, I am sure. Based on my own experience as an inventor and patent engineer I have created a brief presentation as a mini crash course on the US Patent System. It is absolutely not a replacement for professional advice by a patent attorney. However, it may provide you with at least some basic insight into patents: some Patent Basics for the Complete Novice in Patents. I developed the presentation under the auspices of PATENT35.
You can find the presentation here.
Do We Live in a Computer Simulation?
The issue of the world being a computer game received broader attention through Elon Musk’s claim: Humanity is ‘Probably’ Living in a Matrix-Style Computer Simulation.
A recent article by Clara Moskowitz in Scientific American deals with this subject. www.scientificamerican.com/article/arewelivinginacomputersimulation/.
While intriguing, I believe that such an assumption is based on a lack of current scientific knowledge and on a strong influence from the current, not fully developed state of computing. It is not unlike the old belief that the gods were angry when thunder and lightning occurred, as no scientific knowledge of these physical phenomena existed.
I do not believe that we exist on somebody’s hard drive or that we are basically a computer game or that we are autonomous apps.
Let me first address digital filters. Digital filters are calculating machines that have a digitizing input device (A/D converter) and a signal generating output device (D/A converter) and that, under specified conditions (the Nyquist-Shannon sampling condition), act as a physical electronic filter. That is: coils, resistors, capacitors, connections, op-amps and the like are replaced by digital calculators. There are also adaptive digital filters that adapt their performance to changing external conditions, such as noise.
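As a minimal illustration of the principle (a generic Python sketch of my own, not any particular filter discussed here): a moving-average FIR filter replaces an analog smoothing network with arithmetic on the samples delivered by the A/D converter:

```python
def fir_filter(samples, coeffs):
    """Each output sample is a weighted sum of the most recent input samples."""
    out = []
    for i in range(len(samples)):
        acc = 0.0
        for j, c in enumerate(coeffs):
            if i - j >= 0:                 # skip taps before the first sample
                acc += c * samples[i - j]
        out.append(acc)
    return out

# a 4-tap moving average smooths a step input, much like an analog low-pass filter
step = [0, 0, 1, 1, 1, 1]
print(fir_filter(step, [0.25, 0.25, 0.25, 0.25]))   # [0.0, 0.0, 0.25, 0.5, 0.75, 1.0]
```

The coefficients play the role that component values play in the analog circuit; changing them changes the filter without changing any hardware, which is what makes adaptive filters possible.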
This strongly suggests that there is possibly a relationship between some structure that executes steps in accordance with mathematical expressions (which is still a device or structure) and what we see as the perceived real world. A critical part herein is the interface (the transducer) between the “calculating device” and what we see as the physical world. In fact, as a user you may only see the effects of the interface, but remain unaware of the “calculator.”
The mathematical expressions of the “digital filter” do not merely represent the physical reality; they are the physical reality in a different dimension, so to speak. A representation of an electronic filter that we are usually familiar with is for instance the so-called “transfer function,” which describes the frequency behavior of the filter. However, execution by a computer of the formula that describes the transfer function will not perform the function of the filter. This is like calculating E = mc², which will of course not generate the energy that the formula represents.
The above suggests that there may be a relationship between the physical world that we observe and a structure (on a different, currently not observable level) that acts like a digital computer. That is: there may be a 1-to-1 relationship between that “computing” structure and the observable physical world. The relationship is expressed by A/D and D/A interfaces (or natural transducers) that we are currently not able to explain. It is somewhat like the physical bodies of living organisms being expressions of DNA. There clearly is a mutual interaction between DNA and its physical expression, which we call evolution. Antennas and radios are examples of interfaces with otherwise undetectable e.m. radiation, though not necessarily calculating structures.
I speculate that we may find interactions between what we see as the physical world and an overlaying “calculating” structure, which may be a relationship between “string”-like elements. And rather than interacting with the “physical world,” we may in the future try to modify the “calculating structure.”
I also believe we still have no firm grip on the relationship between “calculating models” and reality, consciousness and autonomous computers, for instance. Looking at the primitive ways we have to program computers, we still have a long way to go. And I really doubt that we will find a puppet player who has programmed our world. That is an atavistic “deus ex machina” notion, reflecting a lack of understanding, that we have to learn to suppress. There are more than enough speculative hypotheses to consider before we have to turn to a higher power.
Copyright 2008, 2009, 2010, 2011, 2013,
2015, 2016 Peter Lablans. All rights reserved.
