The Discrete Charm of Continuous Math

Ignacio Francisco Ramírez
5 min read · Jun 21, 2021


The title draws from Luis Buñuel’s movie “The Discreet Charm of the Bourgeoisie”, whose name in Spanish is “El discreto encanto de la burguesía”. Pretty straightforward translation, isn’t it? However, “discreet” and “discrete” mean quite different things in English, while both map to the same word in Spanish: “discreto”. Thus, it should be expected that many of us who are native Spanish speakers, even those fluent in English, fall into these kinds of traps.

So what does this have to do with Math?

This article was inspired by a series of events related to the neverending quarrels between Engineers and Computer Scientists. The former are usually heavily trained in continuous math, as their world, the problems they tackle, and the machines they deal with are governed by continuous things: volumes, currents, etc. Computers, on the other hand, are traditionally discrete machines; they deal with symbols, variables, and states that more often than not take a finite number of different values. Binary symbols, that is, those that can take only two values (e.g., ‘0’ or ‘1’, ‘False’ or ‘True’, ‘yes’ or ‘no’… ‘discrete’ or ‘discreet’), are an extreme case.

Now, on the events:

Yesterday, Michael Wooldridge posted the following comment on Twitter:

Random AI thought of the day: the ML boom has suddenly put continuous maths centre stage in comp sci departments. When I was a student the curriculum was dominated by discrete math — sets, logic, etc. Fascinating to see the pendulum swing the other way…

This comment spurred a long, friendly, and very interesting thread on the subject. It was then quoted by the recent Turing Award laureate Prof. Yann LeCun, which led to yet another thread of interesting comments:

Which is why the best preparation for a PhD in ML/AI is not necessarily CS but electrical engineering, physics, or applied mathematics.

These posts came a week after I had a friendly argument with a number of CS colleagues. Together with people from the EE, CS, and Math departments, we recently launched a Data Science graduate program that has so far been a great success. The argument started because I (improperly) posted a comment on Twitter saying that CS students had too little math (actually, not only CS students, since CS and Computer Engineering are taught together here). See, I am really not a discreet person.

My point, at any rate, is that everyone, especially those pursuing a graduate degree in any field, should have plenty of math, both continuous and discrete.

I wish to illustrate this further with my personal experience, which is somewhat the opposite of what the above comments describe, but by no means uncommon; many of the comments on those posts are from people who had similar experiences.

I graduated as an Electrical Engineer from a six-year (typically seven) “old school” program with insane amounts of math and physics. So much so that many of my classmates managed to complete B.Sc. degrees in physics, math, and CS in parallel. It wasn’t my case.

One day Prof. Gadiel Seroussi came to visit our university in Uruguay. Prof. Seroussi is Uruguayan, but studied in Israel, where he got his B.Sc., M.Sc., and Ph.D. in Computer Science. He was then head of the Information Theory group at Hewlett-Packard Laboratories in Palo Alto. Their big success was the LOCO-I algorithm, which went on to become the JPEG-LS standard. This algorithm was coded and sent to Mars in one of the first rovers so that the images it took could be compressed and sent back to Earth without any quality loss. By then they were working on image denoising, but their discrete mindsets were just not cutting it. Gadiel, in my opinion one of the most brilliant people I’ve met, was well aware of that.

So he gave a talk about their issues with image denoising. I was there and, being the indiscreet person I am, I engaged with most of the questions that Gadiel threw at the audience. I caught his attention, and he offered me an internship at HP Labs, where I would work with them on these issues.

Later, this work would become the core of my M.Sc. thesis.

So there I was in 2004, surrounded by hardcore CS guys. I started doing my thing; I had learned about pattern recognition during the final project of my EE degree. And it went well, except for one thing: my first task was to redo some baseline experiments carried out by a guy who had been an intern before me.

I was satisfied when I got “approximately the same results”. But that wasn’t enough for my boss: he wanted the results to be exactly the same. It took me a long while to understand why; in the meantime, I struggled to get there. It got to the point where my results were identical to the previous ones except for two pixels (in an image of 250,000 pixels), and the difference was minimal: say, I got 250 where the other guy got 249. I couldn’t crack that one. Another member of the group, Erik Ordentlich (now a Senior Principal Scientist at Yahoo), finally came up with the answer: there was a sum of a few terms. I had coded it as a + b + c + d; the other guy had coded it as a + b + d + c. Due to floating-point technicalities, these two expressions, identical in theory, can produce slightly different results, and after rounding, the discrepancy can grow into a difference of 1. That was a huge relief.
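To make this concrete, here is a minimal sketch of the phenomenon in Python. The numeric values are mine, chosen for illustration; they are not the actual numbers from those experiments.

```python
import math

# IEEE-754 addition is not associative: the same terms summed in a
# different order can differ in the last bit.
s1 = (0.1 + 0.2) + 0.3   # 0.6000000000000001
s2 = 0.1 + (0.2 + 0.3)   # 0.6
print(s1 == s2)          # False

# When such a value lands right at a rounding boundary, the tiny
# discrepancy decides the final integer pixel value. The two inputs
# below are hypothetical accumulated sums that differ only in their
# last bits, just as the reordered additions did.
just_below = 249.49999999999997
just_above = 249.50000000000003
print(math.floor(just_below + 0.5))  # 249
print(math.floor(just_above + 0.5))  # 250: the off-by-one pixel
```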

The image processing community couldn’t care less about this; I knew that, and so did they. Perhaps it was a little pedantic, I don’t know and I don’t care. All I know is that I learned a lot from it, and I started seeing the kind of sloppiness that was unacceptable to them but so common among us “continuous guys”. There are relevant consequences to the kinds of oversights I’m talking about, but I guess they are too technical and out of the scope of this article (I’ll be happy to elaborate for anyone who is interested).

By the end of my internship, and after a few more blunders on my part (not necessarily technical ones), my experience in continuous signal processing paid off, and we obtained very good results, a paper, and a patent out of it all.

Later we formed the Information Theory group here at my university, with Gadiel and a few of his former interns and students. There, more was in store for me. We in EE don’t have a good background in discrete math; so much so that I struggled when I took (and later helped teach) Algebraic Error Correcting Codes, a course that relies heavily on Galois fields and discrete algebra, of which I was completely ignorant. Of all the courses I took for my M.Sc., this was the only one where I got less than an A.
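For those who have never touched this material, here is a tiny taste (my own illustration, not material from that course) of the kind of exact, finite arithmetic those codes are built on:

```python
# Arithmetic in the finite field GF(5): integers modulo a prime.
# Unlike floating point, every operation here is exact, and every
# nonzero element has an exact multiplicative inverse.
p = 5

def gf_add(a, b):
    return (a + b) % p

def gf_mul(a, b):
    return (a * b) % p

def gf_inv(a):
    # Fermat's little theorem: a^(p-2) is the inverse of a mod p.
    return pow(a, p - 2, p)

print(gf_add(3, 4))          # 2, since 3 + 4 = 7 ≡ 2 (mod 5)
print(gf_inv(3))             # 2, since 3 * 2 = 6 ≡ 1 (mod 5)
print(gf_mul(3, gf_inv(3)))  # 1, exactly: no rounding anywhere
```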

I guess I was lucky. I had the chance to learn this stuff when it wasn’t too late.

My final advice is: do yourself a favour and cross the line. If you come from CS, take courses and read about continuous math, especially calculus, measure theory, and advanced linear algebra. If your background is EE, take some good courses on discrete math, field theory, logic, and graph theory.

It’s always worth it. It will always open doors for you.
