
A Visual Introduction to Function Kernels

In the last few posts we’ve focused heavily on matrices and their applications. In this post we’re going to use matrices to learn about Kernels.

The Kernel of a function is the set of points that the function sends to 0. Amazingly, once we know this set, we can immediately characterize how the matrix (or linear function) maps its inputs to its outputs.

I hope that by the end of this post you will:

  1. Understand what the Kernel of a function is and how it helps us understand a function better.
  2. Realize that the pre-images of output points are always some translation of the Kernel (for linear functions).
  3. See that there are many pretty patterns and coincidences that flow out of the properties of linear functions.

Functions Across Spaces

In previous posts, we noticed how matrices are just linear functions. We found that the matrices we studied just rotate or stretch a vector in some way.

An example of a function that stretches its input. Source: http://developer.apple.com.

But we only studied square matrices (e.g. 2x2 or 3x3 matrices). What happens when our matrices aren’t square?

Let’s start with a 2x3 matrix. We’ll study the linear function f defined by the matrix F:

F = \begin{bmatrix} 2 & 1 & 0 \\ 0 & 1 & 3 \end{bmatrix}

What happens if I apply this matrix to a vector v = \begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix}?

Let’s find out:

\begin{aligned} F v &= \begin{bmatrix} 2 & 1 & 0 \\ 0 & 1 & 3 \end{bmatrix} \begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix} \\ F v &= \begin{bmatrix} 8 \\ 5 \end{bmatrix} \end{aligned}

In other words, f(\begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix}) = \begin{bmatrix} 8 \\ 5 \end{bmatrix}.

So f effectively takes a point in 3 dimensions, \begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix}, and sends it to a point in 2 dimensions, \begin{bmatrix} 8 \\ 5 \end{bmatrix}. We can see this below:

f maps points from R^3 to R^2.
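We can reproduce this computation numerically. The sketch below uses NumPy (my choice of tool; the post itself doesn’t rely on any code):

```python
import numpy as np

# The matrix F that defines our linear function f
F = np.array([[2, 1, 0],
              [0, 1, 3]])

# The 3D input vector v
v = np.array([3, 2, 1])

# Applying f is just matrix-vector multiplication: 3D in, 2D out
print(F @ v)  # [8 5]
```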

We interact with functions that take points from 3D to 2D all the time. For instance, every time you take a picture with a camera, you are taking a 3D space (the world you see) and collapsing it onto a 2D space (the camera sensor).

A camera converts a 3D object (the tree) into a 2D representation (image). Source: Wikipedia.

The input space for f is R^3 and the output space is R^2. More formally, we write this as:

f: R^3 \rightarrow R^2

Losing Information

Returning to the example of cameras: when we take pictures, we squash a 3D world onto a 2D sensor. In the process, we lose some information, primarily related to depth.

Specifically, points that are far from the camera will appear close to each other in the image, even though they may be quite far apart in the world.

Eventually, points infinitely far away on the horizon all collapse onto the same point. We can see this in the example image below.

Notice how the pillars appear closer and closer together even though they stay the same distance apart. We have lost information about the distance between these pillars when converting to 2D. Image source: Unsplash.

So just like a camera, will our function also “lose” some information when it moves points from 3D to 2D? Will it collapse multiple points from the input to the same point in the output?

The Kernel - Set of Points that Map to 0

To answer this, let’s start by finding all the points v = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} that map onto the origin of the output space, \begin{bmatrix} 0 \\ 0 \end{bmatrix}. This gives us a good starting point for understanding which points from our input hit the same point in the output.

Which points map to \begin{bmatrix} 0 \\ 0 \end{bmatrix}?

We want to solve:

\begin{aligned} Fv &= \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ \begin{bmatrix} 2 & 1 & 0 \\ 0 & 1 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} &= \begin{bmatrix} 0 \\ 0 \end{bmatrix} \end{aligned}

Carrying out this multiplication, we see this is satisfied when:

\begin{aligned} 2x + y &= 0 \\ y + 3z &= 0 \end{aligned}

Solving for each variable in terms of y, we find:

\begin{aligned} x &= -\frac{y}{2} \\ z &= -\frac{y}{3} \end{aligned}

So f^{-1}(\begin{bmatrix} 0 \\ 0 \end{bmatrix}) is a line parameterized by:

\begin{aligned} x &= -\frac{t}{2} \\ y &= t \\ z &= -\frac{t}{3} \end{aligned}

for t \in R.

This line is shown below. Some points on this line are: v_1 = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}, v_2 = \begin{bmatrix} -3 \\ 6 \\ -2 \end{bmatrix}, v_3 = \begin{bmatrix} -4.5 \\ 9 \\ -3 \end{bmatrix}.

The Kernel of f is the set of points that f maps to 0. In this case, it forms a line.

There’s a term for this set - the Kernel. We call it the Kernel because this set of vectors maps to the origin, or the core, of the output space.

In our specific case, the Kernel is a line. When f maps this line to the point 0, we lose information about the line - in the output space, the points on the line are no longer distinguishable.

Returning to our camera analogy, this is similar to how all points on the horizon are no longer distinguishable after the conversion from 3D to 2D. Thus, you can think of the Kernel as a quick way to see how the function compresses or loses information.
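Before moving on, we can check the kernel line numerically: every point on it should map to the origin of the output space. Here is a small NumPy sketch (the helper name `kernel_point` is my own, not from the post):

```python
import numpy as np

F = np.array([[2, 1, 0],
              [0, 1, 3]])

def kernel_point(t):
    """The point on the kernel line at parameter t."""
    return np.array([-t / 2, t, -t / 3])

# Every point on the line maps to the origin of the output space
for t in [0.0, 6.0, 9.0]:
    print(F @ kernel_point(t))  # F @ kernel_point(t) is [0, 0] for every t
```

For instance, t = 6 gives the point [-3, 6, -2] from above, and t = 9 gives [-4.5, 9, -3].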

Terminology Aside

Let’s get some more quick terminology out of the way before proceeding. We’re going to use the following terms:

  1. Image - the set of outputs of the function (i.e. every value f(x) can take). The image of a point x is just f(x).

  2. Pre-Image - the set of inputs to the function (i.e. the x in f(x)). The pre-image of a point y is just f^{-1}(y).

In the above example, the function f maps the oval (Pre-Image) on the left to the point (Image) on the right.

Translations of the Kernel - Mapping to \begin{bmatrix} 1 \\ 1 \end{bmatrix}

We found the set of points that map to \begin{bmatrix} 0 \\ 0 \end{bmatrix} (i.e. the pre-image of the origin). We call this set the Kernel, or K for short.

Can we now similarly find the set of points that map to \begin{bmatrix} 1 \\ 1 \end{bmatrix}?

We're going to do this to show something really cool:
  • Once you know the pre-image of \begin{bmatrix} 0 \\ 0 \end{bmatrix}, it's super simple to find the pre-image of \begin{bmatrix} 1 \\ 1 \end{bmatrix}, or any other point for that matter.

Finding the pre-image

Let’s start by finding the points that map to \begin{bmatrix} 1 \\ 1 \end{bmatrix}, as before.

\begin{aligned} F v &= \begin{bmatrix} 1 \\ 1 \end{bmatrix} \\ \begin{bmatrix} 2 & 1 & 0 \\ 0 & 1 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} &= \begin{bmatrix} 1 \\ 1 \end{bmatrix} \end{aligned}

Solving for each variable, we find that this is just the line defined by:

\begin{aligned} x &= \frac{1-t}{2} \\ y &= t \\ z &= \frac{1-t}{3} \end{aligned}

for t \in R.

f^{-1}(\begin{bmatrix} 1 \\ 1 \end{bmatrix}) is the set of points that f maps to \begin{bmatrix} 1 \\ 1 \end{bmatrix}. This is also a line. Notice how similar it is to the line for K.

Some valid points are:

v = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, v = \begin{bmatrix} -3 \\ 7 \\ -2 \end{bmatrix}.
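We can confirm this parameterization with NumPy as well (`preimage_point` is a hypothetical helper name of my own):

```python
import numpy as np

F = np.array([[2, 1, 0],
              [0, 1, 3]])

def preimage_point(t):
    """The point on the line f^{-1}([1, 1]) at parameter t."""
    return np.array([(1 - t) / 2, t, (1 - t) / 3])

# t = 1 gives [0, 1, 0]; t = 7 gives [-3, 7, -2]; both map to [1, 1]
for t in [1.0, 7.0]:
    print(F @ preimage_point(t))  # [1, 1] each time
```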

This line looks awfully similar to the line for K, doesn’t it?

Let’s see them both on the same graph. Notice that they’re parallel to each other!

Notice that f^{-1}(\begin{bmatrix} 1 \\ 1 \end{bmatrix}) is parallel to K. In other words, f^{-1}(\begin{bmatrix} 1 \\ 1 \end{bmatrix}) is just K shifted over.

Translating the Kernel

So what’s the relation between the two lines we plotted above - f^{-1}(\begin{bmatrix} 1 \\ 1 \end{bmatrix}) and K = f^{-1}(\begin{bmatrix} 0 \\ 0 \end{bmatrix})?

f^{-1}(\begin{bmatrix} 1 \\ 1 \end{bmatrix}) is just a translation of K.

It is a translation by any vector v \in f^{-1}(\begin{bmatrix} 1 \\ 1 \end{bmatrix}).

Adding K to v gives us the full pre-image, f^{-1}(\begin{bmatrix} 1 \\ 1 \end{bmatrix}).

Or said another way,

f^{-1}(\begin{bmatrix} 1 \\ 1 \end{bmatrix}) = v + K \text{, for any } v \in f^{-1}(\begin{bmatrix} 1 \\ 1 \end{bmatrix})

This seems kind of too good to be true. Is it? Let’s test it out!

  1. Let’s take a point k \in K. For instance, k = \begin{bmatrix} -3 \\ 6 \\ -2 \end{bmatrix}.

  2. Let’s take a v \in f^{-1}(\begin{bmatrix} 1 \\ 1 \end{bmatrix}), like v = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}.

What’s f(k + v)?

\begin{aligned} f(k + v) &= f(\begin{bmatrix} -3 \\ 6 \\ -2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}) \\ &= f(\begin{bmatrix} -3 \\ 7 \\ -2 \end{bmatrix}) \\ &= F \begin{bmatrix} -3 \\ 7 \\ -2 \end{bmatrix} \\ &= \begin{bmatrix} 2 & 1 & 0 \\ 0 & 1 & 3 \end{bmatrix} \begin{bmatrix} -3 \\ 7 \\ -2 \end{bmatrix} \\ &= \begin{bmatrix} 1 \\ 1 \end{bmatrix} \end{aligned}

So it is indeed the case here that f(k + v) is \begin{bmatrix} 1 \\ 1 \end{bmatrix}!
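The same check in NumPy, using a kernel point and a pre-image point from the examples above:

```python
import numpy as np

F = np.array([[2, 1, 0],
              [0, 1, 3]])

k = np.array([-3, 6, -2])  # a point in the kernel K: F @ k = [0, 0]
v = np.array([0, 1, 0])    # a point in f^{-1}([1, 1]): F @ v = [1, 1]

# Adding a kernel point to v does not change where f sends it
print(F @ (k + v))  # [1 1]
```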

All Translations of the Kernel are Pre-Images

Ok there’s something kind of mind blowing going on here:

  1. We took one point in f^{-1}(\begin{bmatrix} 1 \\ 1 \end{bmatrix}).
  2. We added K to it.
  3. And suddenly we got ALL of f^{-1}(\begin{bmatrix} 1 \\ 1 \end{bmatrix})!

In fact this is true more generally!

If you give me ANY point v that maps to some output f(v), then I can find ALL the points that map to f(v) by just forming v + K.

Breaking Down Why

Let’s break the above statement down into two parts.

  1. First, we’re saying that given some v, all points in the set v + K will map to the same place as v (i.e. f(v) = f(v + K)).
  2. Next, these are ALL the points that map to f(v). In other words, every point that maps to f(v) must be in the set v + K.

The first statement simply says that all points on the line v + K must map to f(v). The second statement says that if points map to f(v), like A and B above, then they must also fall on the line v + K.

Let’s prove each of the above statements more formally, starting with the first.

1. All points in the set v + K map to the same place as v

A more formal way of saying this is:

f(v + K) = f(v) \text{, for any } v \in R^3

Let’s break down why this is true. Take any k \in K (in the kernel). Then,

\begin{aligned} f(v+k) &= f(v) + f(k) && \text{since } f \text{ is a linear function} \\ f(v+k) &= f(v) + 0 && \text{since } k \in K \\ f(v+k) &= f(v) \end{aligned}

The below video shows this visually.

By decomposing v + k into v and k, we see that f(v + k) = f(v).

Additionally, given this is true for any single v + k, it is true for all points on the line v + K. The reason is that every choice of k \in K contributes nothing - only the value of v matters to f. This is shown below:

Any point in K, such as k, k', or k'', does not change the result of f. Hence, f(v + K) = f(v).
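This invariance is easy to confirm numerically: add several different kernel points to the same v and watch the output stay fixed (the particular v below is an arbitrary choice of mine):

```python
import numpy as np

F = np.array([[2, 1, 0],
              [0, 1, 3]])

v = np.array([1.0, 2.0, 3.0])
print(F @ v)  # F @ v is [4, 11]

# Different kernel points k, k', k'' -- the image never moves
for t in [3.0, -6.0, 12.0]:
    k = np.array([-t / 2, t, -t / 3])  # a point in K
    print(F @ (v + k))  # [4, 11] each time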

Let’s now move to the next statement.

2. Every point that maps to f(v) must be in the set v + K

Essentially, this is saying that there can be no point w such that w maps to f(v) but is not in v + K.

Let’s prove this.

Choose any v, w such that f(v) = v' and f(w) = v'. We wish to show that w \in v + K.

  1. Let w' = w - v.
  2. Then f(w') = f(w - v) = f(w) - f(v) = 0.
  3. Hence w' \in K (as all points that map to 0 are in K).
  4. Thus, v + w' \in v + K.
  5. Since v + w' = v + w - v = w, we have w \in v + K.

So we’ve successfully proved our two points!
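The argument can be mirrored numerically with a concrete pair v, w that share an image (values taken from the earlier example):

```python
import numpy as np

F = np.array([[2, 1, 0],
              [0, 1, 3]])

v = np.array([0.0, 1.0, 0.0])    # maps to [1, 1]
w = np.array([-3.0, 7.0, -2.0])  # also maps to [1, 1]
assert np.allclose(F @ v, F @ w)

# Steps 1-3 of the proof: w' = w - v lands in the kernel
w_prime = w - v
print(w_prime)      # [-3.  6. -2.]
print(F @ w_prime)  # [0. 0.]  ->  w' is in K, so w is in v + K
```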

The Relation Between Translations of K and Points in the Image

We’ve already seen something really cool - every translation of K is the full pre-image of some point in the image of f.

Now, is there any relation between how far apart two translations of K are (say v + K and w + K) and how far apart their images are (f(v), f(w))?


If v + K and w + K are v' apart, their images will be f(v') apart.

Why is this the case? It again follows pretty simply:

\begin{aligned} w + K &= v + K + v' + K && \text{since } w + K \text{ and } v + K \text{ are } v' \text{ apart} \\ w + K &= v + v' + K \\ f(w + K) &= f(v + v' + K) && \text{apply } f \\ f(w) &= f(v + v') && \text{since } f(w + K) = f(w) \\ f(w) &= f(v) + f(v') && \text{hence } f(v') \text{ apart} \end{aligned}
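A numerical illustration of this derivation (the particular v and v' are arbitrary choices of mine):

```python
import numpy as np

F = np.array([[2, 1, 0],
              [0, 1, 3]])

v = np.array([0.0, 1.0, 0.0])        # a point whose line is v + K
v_prime = np.array([1.0, 0.0, 1.0])  # the offset between the two lines
w = v + v_prime                      # so w + K is v' away from v + K

# The images are exactly f(v') apart
print(F @ w)                # [3. 4.]
print(F @ v + F @ v_prime)  # [3. 4.]
```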

Overall Space

Let’s now take a step back and view what’s happening in the overall space.

Every point in the image can be seen as the image of some translation of K. As we move K around, we get new points in the image!


We’ve now seen some really cool things that you may not have noticed before:

  1. Every matrix is a linear function, and that linear function will have some kernel K that maps to 0.
  2. All pre-images of output points are just going to be translations of K.
  3. If v' is the distance between two translations of K, then f(v') is the distance between their images.

The last point actually leads us to the first isomorphism theorem of group theory. It broadly states that the relation between the pre-image sets of a special type of function known as a homomorphism (in our case f) is exactly the same as the relation between the output points (we’ll go into this in the next blog post!).

There are many practical uses of this knowledge but I wanted to share it for simpler reasons. Sometimes math is just pretty - it has all these cool properties that fit together so nicely that you can’t help but enjoy seeing them.

For example:

  1. Who would have thought that all the pre-image sets are just translations of each other?

  2. Or that the relation between these pre-image sets mirrors the relation between the points in the image?

I hope you enjoyed getting a taste of some abstract algebra and I’ll see you in the next post!