The Codess

Algorithms are nature reimagined?

I think a lot of the fear of new technology (aside from the questionable morals of some of its possible users) comes from a lack of understanding of how the technology works and where it comes from. It should be noted that history is my least favorite subject, but in this case it sheds light on the foundations of current AI such as ChatGPT. Knowing the original ideas behind these extremely complex algorithms takes some of the punch out of the unknown. In my research, I've found that a common thread for many of these algorithms is that the idea originated from an observation of nature. After a few examples, you may find that these algorithms aren't so much sci-fi as they are just science.


A good place to start would be the simplest form of machine learning: the perceptron.

One of the first perceptrons was used to identify patterns in a stream of bits so that it could predict the next bit. Bits are 0's and 1's, and they're how computers understand and store information in memory. When you see the tiny green 0's and 1's falling down the screen in The Matrix or some hacker movie, those are bits! Fun fact: 8 bits make a byte. The basic perceptron comprises an input layer that takes in the data, a set of weights that get summed up, an activation function, and an output. Here's an example: you send in a picture of a dog (input) and want to know if the image is "a dog" or "not a dog". The machine weighs certain characteristics such as "Are the ears pointed?", "Does it have a long snout?", "Does it have four legs?", "Does it have a tail?". These are represented as values known as weights, which are summed up and passed through an activation function that decides "Is this an image of a dog or not?". This may look like "If the sum of the weights is more than 10, it is a dog." Then this is outputted to the user. Pretty neat!
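That whole dog detector fits in a few lines of Python. Here's a minimal sketch — the feature names, weights, and the threshold of 10 are made up for illustration, not from any real trained model:

```python
# Toy perceptron: decide "dog" vs "not a dog" from four yes/no features.

def perceptron(features, weights, threshold):
    # Weighted sum of the inputs
    total = sum(f * w for f, w in zip(features, weights))
    # Step activation function: fires (1) if the sum exceeds the threshold
    return 1 if total > threshold else 0

# features: [pointed ears?, long snout?, four legs?, has a tail?]
weights = [3, 4, 2, 2]      # how strongly each feature suggests "dog"
dog_photo = [1, 1, 1, 1]    # all four features present: 3+4+2+2 = 11
cat_photo = [1, 0, 1, 1]    # no long snout: 3+0+2+2 = 7

print(perceptron(dog_photo, weights, threshold=10))  # 1 -> "a dog"
print(perceptron(cat_photo, weights, threshold=10))  # 0 -> "not a dog"
```

A real perceptron learns its weights from labeled examples instead of having them hand-picked, but the input-sum-activate-output skeleton is exactly this.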


So, this first perceptron took in a pattern like "0001 0010 0100" and was able to spit out the next bit in the sequence, which is "1". And this model was made in...1959. The idea for a perceptron is even older: it was originally thought up in 1943 by neurophysiologist Warren McCulloch and mathematician Walter Pitts. They studied how thoughts occur in the brain when electrical impulses are sent from one neuron to another via a synapse, and they theorized that an artificial model could be built with electrical circuits. It's crazy to think the first electric-only computer was made only 6 years prior, in 1937! When you think of the seeds of AI being almost as old as the beginnings of our modern computers, it begins to put things in perspective.

Another great example is genetic algorithms, which are taken straight out of Darwin's book! Genetic algorithms are used for optimization. We initialize a random "population" of points and determine each one's fitness value. Darwin used fitness to describe how likely parents were to pass good genes to their children for survival in the wild. In computers, fitness is evaluated by some mathematical function. Then we randomly select parents in the population and perform "mutation" and "crossover" to create "children" who will be the parents of the next generation. This process is repeated until the set conditions are achieved or a predetermined number of generations is reached. Let's say we start with 2 parents, [1,2] and [3,4], and we want them both to be [1,4]. We perform a mutation on the first parent, making it [1,7], and a crossover on the second, making it [1,4]. The fitness of each would be 0.5 and 1.0, respectively. This means the second child would survive. This type of algorithm has been useful for many things, such as solving sudoku puzzles!
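To make that [1,2] and [3,4] example concrete, here's a minimal sketch of a genetic algorithm in plain Python. The fitness function (fraction of genes matching the target), the mutation and crossover rules, and the population size of 6 are my own assumptions for illustration:

```python
import random

TARGET = [1, 4]

def fitness(individual):
    # Fraction of genes matching the target: [1,7] -> 0.5, [1,4] -> 1.0
    return sum(a == b for a, b in zip(individual, TARGET)) / len(TARGET)

def mutate(individual):
    # Randomly overwrite one gene with a new value in 0..9
    child = individual[:]
    i = random.randrange(len(child))
    child[i] = random.randint(0, 9)
    return child

def crossover(a, b):
    # Take genes before the cut point from one parent, the rest from the other
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

# Random starting population of 6 individuals
population = [[random.randint(0, 9), random.randint(0, 9)] for _ in range(6)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 1.0:
        break  # stopping condition reached
    # The fittest half survives; mutated crossover children fill the rest
    survivors = population[:3]
    children = [mutate(crossover(*random.sample(survivors, 2)))
                for _ in range(3)]
    population = survivors + children

print(population[0])  # best individual found, ideally [1, 4]
```

Note that with two-gene parents like [1,2] and [3,4], the only possible cut point is the middle, so crossover([1,2], [3,4]) gives exactly the [1,4] child from the example above.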




For the last example, we'll talk about a subject rooted more in mathematics: Earth Mover's Distance (or Wasserstein distance). In 1781, the French mathematician Gaspard Monge wondered how to get a pile of dirt from one spot to another in the fewest steps possible. While we know how to move one solid object from one spot to another the fastest (like pushing a chair to the other side of the room), moving a distribution of an object (like a mountain of sand across the beach) is much more difficult mathematically. This became known as the optimal transport problem. The Wasserstein metric measures how best to move one distribution to another at the least cost. It has many uses, ranging from particle physics to generative adversarial networks to image denoising.
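In one dimension there's a nice shortcut: the Wasserstein-1 distance between two equal-sized sets of points is just the average distance between their sorted samples. Here's a toy sketch of Monge's dirt pile (the positions are made up for illustration):

```python
# 1-D earth mover's distance between two equal-sized sample sets.
# In 1-D, optimal transport pairs the sorted samples, so the distance
# is the average absolute gap between them.

def emd_1d(xs, ys):
    xs, ys = sorted(xs), sorted(ys)
    assert len(xs) == len(ys), "this shortcut needs equal-sized piles"
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

pile_a = [0, 1, 2]  # dirt currently sitting at positions 0, 1, 2
pile_b = [3, 4, 5]  # we want it at positions 3, 4, 5

print(emd_1d(pile_a, pile_b))  # 3.0 -> on average, each shovelful moves 3 steps
```

In higher dimensions (or with unequal piles) you can't just sort, which is why the general optimal transport problem is so much harder and needs real solvers.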



Science is an art, in my opinion. It’s the art of taking observations from our world and understanding them to the point that we can apply them to other things. There is no shortage of examples of natural processes, such as butterfly migration patterns, that we’ve turned into algorithms that can help us solve problems.
