Using Blender to learn the basics of Adaptive Optics (part II / II)

Introduction

This post is the continuation of a previous post about making an AO system simulation using blender (+friends). In this post we will talk about the implementation of the simulator. For details about what we simulate, please refer to the previous post. As always, everything is on GitHub.

In summary, we use blender together with python, Matlab and a bit of Open Shading Language (OSL) to simulate a simple (although quite didactic) AO system. We simulate the star light, the medium the light passes through (atmosphere), the wavefront sensor that measures the phase deformation and the deformable mirror that corrects for that deformation. We also compute all the cool math that is used in a real AO system with the data generated by our simulator. It is important to mention that this is a real simulator, not an emulator. That means that we really simulate the physics of the light diffracting and reflecting off of surfaces. We could "emulate" an AO system by having a pre-computed interaction matrix for some pre-defined objects and show how PSFs, DM commands, etc. would behave. However, that's not the case. We really simulate the light and compute interaction matrices, DM commands, etc. all using "real" raytraced light data.

The whole simulator is a little bit intricate because a lot of problems had to be solved in order to make it possible. To better explain the whole process, the next figure illustrates the "usage" of the system, and the video shows the system in action. Later on we progressively explain how things work internally and how they were done.



Figure 1: Simulator usage

Figure 2: System in action (better viewed in fullscreen)

In the system, we have a simulated point source of light, a deformable mirror and a wavefront sensor. Each one is made up of normal blender objects carefully adapted or picked to work like its real counterpart (to simulate, not emulate).

When we open the blender file, we have all the components of the simulator. In summary, there are several "parts" that constitute the blender side of the simulator itself:

  • 3D model of the physical parts
    • Light source
    • Wavefront sensor
    • Deformable mirror
  • Some translucent objects to be used as medium where the light passes through
    • Lenses
    • Atmosphere
  • Special material to be used as detector
    • detector.osl
  • Python code for the panel and classes
    • panel.py
    • WFS.py
    • DM.py



Figure 3: Blender components

The system itself, as said in the beginning, is composed of several scripts in Python and Matlab. The system started as a small experiment to learn about AO. Because of that, several parts "grew" in functionality while others remained as they were. In the end, the system itself is kind of a "Frankenstein", mixing inter-process communication with file-based information exchange, among other "gambiarras" (Portuguese word for "informal technical adaptation" 😅). The overview of the technologies and parts involved in the implementation is shown in the following figure. In the future, if time allows, I'll try to have one big python script to control everything and avoid this "soup" of scripts going back and forth with information.



Figure 4: Diagram of the system indicating its components

Blender is the center of everything. It communicates with an external python script, called PipingServer, via IPC using a terminal pipe (explained later). Another python script inside blender sends commands to this PipingServer in order to, for instance, plot some measurement or save data. The PipingServer has commands to save the data as Matlab files that later on will be used to communicate with a Matlab script that computes the AO business (interaction matrix, command matrix, etc.). Matlab then saves data that will later on be read back by blender and used in the deformable mirror.

Referring back to the first figure, we will now go a bit further into the details of the sequence that the user must follow to use the system.

  • Open blender: In order to allow blender to interact with the external script, we open it using a unix piping scheme. This pipes blender's console output into the stdin of the PipingServer.
  • Calibrate the system: We run the panel.py script inside the blender file to execute the GUI part of the system. There, there are several buttons to control the AO system. The first thing we should do is to mark the "calibrate" check box and click "compute".
  • Assemble the environment: The user can now place any object in the light path. There is a small square close to the WFS and the DM where the user can put, say, a lens or any transparent object.
  • Compute simulation: Once the system is calibrated, we can start to use it. Now, the "compute" button will tell the system to take a snapshot of the "star" and plot (in the external python window) the data related to the measurement. It will plot things like the WFS "image", PSF, slopes, etc. Anything in the light path will "distort" the light, generating a pattern of wavefront phase in the WFS. Next, the user can use the external plot window to see the PSF and even a simulated image of how this element in the light path distorts images that are far away from the system (see the NIRPS logo in the next figures).
  • Open Matlab: Once a measurement is done, data will automatically be saved and some Matlab files are going to be placed in the /tmp folder. In Matlab, there will be a script that reads those Matlab files and processes them.
  • Process the data in Matlab: When the user runs the Matlab script, it will gather the data produced by the measurement done in blender and compute all the AO business. It assembles an interaction matrix, applies it to the recently read slopes and generates a DM command vector. Everything is illustrated by tons of plots describing the process and the data used. Matlab then saves the commands as a text file in the /tmp folder.
  • Go back to blender and read the DM commands: Now that Matlab has computed the DM commands, the user loads them by clicking the "Load DM commands" button. This will tell blender to read the saved text file and literally deform the DM element with the shape computed by Matlab.
  • Compute new simulation: With the mirror deformed, we can click the "compute" button again (keeping the object in the light path). The result, if everything goes well, is an "undeformed" image in the plots.

  • Appreciate the result: Everything that happens is completely physically simulated (see the raytrace part later). There is no pre-defined shape related to a pre-defined set of slopes or deformations of the mirror. ANYTHING translucent that we put in the light path will realistically refract the rays, and that refraction will be measured by the WFS component. Also, deforming the mirror will reflect the rays in another direction, which will also be captured by the WFS. Hence, when Matlab computes the mirror commands, it is computing the exact shape that deforms the rays in the exact way so as to undo whatever deformation was produced by the refractions of the object placed in the light path. Moreover, since we are in a 3D modelling software, one can put ANY shape in the light path and correct for it as long as it is within the range of the WFS. I cannot think of a better way to learn and get a sense of the subject of adaptive optics!

The system

This simulator is far from being a product. There are professional simulators available that can be used professionally. However, as a learning tool, it has proven (at least to me) very useful. The official and professional software uses a technique called raytracing to simulate the light and do all this magic that I just described. The central point is very very simple: shoot a "ray of light" at an angle \alpha from a source in the direction of a WFS and measure its arriving angle. If it is the same \alpha, there is no phase shift in the light. That means that there was no refraction in the path of the ray (or there was some that was corrected). Although this is a very simple principle, it is very very powerful when we apply it to millions of light rays.

So, to perform all this "mathemagic" of AO simulation, we would need to implement a raytracer. Although it would be a small one, we would need to deal with all the 3D business of line intersections, 3D shape representation, material refraction computations, etc., etc., etc. In this project we exploited one of my favorite pieces of software: blender. Blender is a 3D rendering software and it has a very powerful raytracer called Cycles. Moreover, it is monstrously customisable via its internal python API. Basically, all the operations that we do manually in the software have an exposed python command that can be called via script internally in blender. It is even possible to implement GUIs with buttons, panels, etc.
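As an illustration of how far this customisation goes, a working panel with a button boils down to a few lines of the bpy API. The sketch below is hypothetical (the class names and operator idname are made up; it is not the actual panel.py):

import bpy

class AO_OT_compute(bpy.types.Operator):
    """Render the scene so Cycles shoots the rays and the detector shader prints the hits"""
    bl_idname = "ao.compute"
    bl_label = "Compute"

    def execute(self, context):
        bpy.ops.render.render()   # triggers the raytracing
        return {'FINISHED'}

class AO_PT_panel(bpy.types.Panel):
    """Simple UI panel in the 3D viewport sidebar"""
    bl_label = "AO Simulator"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'UI'
    bl_category = "AO"

    def draw(self, context):
        self.layout.operator("ao.compute", text="Compute")

bpy.utils.register_class(AO_OT_compute)
bpy.utils.register_class(AO_PT_panel)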

Using this internal python API, we can make a panel for the user to interact with, with buttons like "Compute", "Load DM commands", etc. Together with that, we can use the whole 3D apparatus to build the environment. We can put a mirror, make lenses, etc. When I say "make lenses" I don't mean coding the math of light refraction based on the internal radius and bla bla bla. I mean literally "make a lens": place two glass spheres with some intersection and ask blender to subtract one from the other. That's it. It's really really a "mechanical" process. All the business of refraction, angles, radius, etc., etc. will be dealt with by the raytracer! This is exactly what you will find in the blender file used in this simulator. Coming back to figure 3, you will see exactly that. The next figure illustrates the idea for better understanding:



Figure 5: Raytrace diagram used in the blender file

As you see in the figure, we have a source of rays (triangle on the right) that shoots rays against the mirror (tilted square on the left). The rays reflect down and pass through the medium (translucent object). The medium refracts the light rays before they reach the WFS. The WFS is very tricky to implement, but it is basically composed of a set of micro lenses (lenslets is, I think, the technical name) and a set of detectors.

Now, in summary, when we press the "Compute" button, blender generates tens of thousands of light rays using Cycles. Each one finds its way to the detector, reflecting off of the deformable mirror and refracting when passing through the medium object. The micro lenses (which are also regular objects made in blender) focus the light onto the center of the detectors in order to simulate the behaviour of the WFS. The light that arrives at the detector is processed and sent to Matlab, which uses it to compute the exact deformation that needs to be applied to the mirror to undo the refractions that the light suffered when passing through the medium. Then, the user loads the commands, which literally deform the mirror, performing the correction.

The challenges involved

Up to this point, I assume you understood what the simulation is doing. Now it is time to see how it does it. As stated before, a lot of "tricks" had to be thought out in order to overcome all the challenges. In the following sections we will detail each one.

Artificial Star

The first trick was to get blender to do the raytrace job for us. Normally we would think of placing a light source as the simulated “star” so that “rays of light” will go from the light source to the detectors. However, this is not how normal raytracers work. A raytracer (like Cycles) works in the opposite way, shooting rays from the camera to the objects. Because of that, our light source will actually be the camera! When we tell blender to render the scene, the camera will shoot rays that will reflect off of the mirror and end up in the detector. So, in simple terms, our “star” is a blender camera object.

This simple decision of making the camera be our "light source" gives us, for free, a very powerful raytracer! Now we can put in objects that refract light and use the whole palette of materials available in blender. Moreover, our deformable mirror is now simply a plane with a highly glossy material and lots of subdivisions (vertices) that can be moved up and down to deform its surface.

Deformable Mirror

This is the second implementation trick. As said before, the DM is composed of a simple plane with several subdivisions. Normally the DMs in real AO systems are not too "subdivided". Although there are mirrors with 100×100 actuators, in general we are looking at something around 16×16 actuators with a circular mask (only actuators inside the circle can actually move). The naive way to represent such a mirror is to use a plane with 16×16 divisions and let each vertex be an actuator that can be pushed up or down. Unfortunately, this would lead to an unrealistic mirror, since the pushed vertex would form a "small pyramid" with a lot of discontinuities, scattering the light in only 4 directions (one for each side of the pyramid). To overcome this problem, we make the plane with hundreds of vertices. Now, each "actuator" is represented by tens of vertices. Moreover, when we "push" an actuator, we make the vertices related to that actuator take the shape of a Gaussian centred at the actuator position. This allows a smooth deformation for each actuator, simulating a real deformable mirror.



Figure 5: DM command smoothing

To wrap all this DM functionality, we have a class in DM.py that implements stuff like the pushing and pulling of the actuators, the loading of the commands, etc. It also assembles a DM from a config JSON file that specifies how many actuators there are and the circular mask that has to be applied.
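Just to make the idea concrete, the Gaussian "push" of a single actuator boils down to something like the sketch below. This is a hypothetical simplification, not the actual DM.py: the width sigma and the names are made up, and in blender the resulting offsets would be applied to the mesh vertices through the bpy API.

import numpy as np

def actuator_offsets(vertices, centre, amplitude, sigma=0.08):
    # vertices: (N, 2) array with the (x, y) position of each mirror vertex
    # centre:   (x0, y0) position of the actuator being pushed
    # returns the vertical displacement of every vertex: a Gaussian bump
    x0, y0 = centre
    d2 = (vertices[:, 0] - x0)**2 + (vertices[:, 1] - y0)**2
    return amplitude * np.exp(-d2 / (2.0 * sigma**2))

# the full mirror shape is just the sum of one Gaussian bump per actuator command:
# offsets = sum(actuator_offsets(vertices, pos, c) for pos, c in zip(actuator_positions, commands))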

Wavefront Sensor

This is the trickiest component of the system and the heart of the simulator. It is also the most interesting one to implement because, like the DM, it works by optically simulating the device. The wavefront sensor simulated here is a Shack–Hartmann wavefront sensor and I already talked about it here. It works by sampling the light spatially using a set of tiny lenses and a big CCD that captures each spatial sample. The details are in the link I just mentioned.

In blender, the WFS component is exactly what a Shack–Hartmann physically is: a set of small lenses forming a circular area where the light is "sampled". Each lens focuses its light rays onto a small "detector". In our case, the detectors are tiny planes (squeezed cubes) that lie below each micro lens at its focal position. In a real physical device, these detectors are simply a square region on a larger CCD (or any other imaging sensor). Therein lies the tricky part. Up to focusing the light at the sampled spots, blender was doing all the work, raytracing the light rays all the way to each tiny detector. Now we have to figure out how to "capture" the light rays that hit the detector. In particular, we need to know where the light rays hit each detector (the x,y coordinate of the hit).



Figure 6: Wavefront sensor component in blender

To have an idea of what the light path should be, the next figure shows what we would see if we were positioned at the camera. One important point to mention is the render configuration used in this blender file. As mentioned before, we use Cycles as the renderer. Cycles has several parameters that can be configured to produce the appropriate render of an image. Since we don't use the image that is produced, we basically care about three of those parameters. The first one is the size of the "image" that would be rendered. Although we don't care about the produced image, we do care about how many rays will be produced. The size of that image determines exactly how many rays Cycles will generate, hence influencing the precision of our measurement. That leads us to the second important parameter, which is the number of rays per pixel. This is controlled by the "samples" Cycles parameter. The third is actually a "set" of parameters called "bounces". They control how many times a light ray can bounce off of different materials. To be able to focus the light on the detector material and simulate refractions, we need to keep the bounces at 4 or 5 at least for most of the material properties (reflection, transparency, etc.).

Figure 7: Render of the blender scene from the camera view. Left: Render with a clean light path and no object placed. Right: Same render, but this time with an object placed in the light path.
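For reference, those three groups of parameters map onto the Cycles Python API roughly as below. The values are only illustrative and not necessarily the ones used in the blend file:

import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

# "image" size: only controls how many rays get shot (the rendered pixels are never used)
scene.render.resolution_x = 512
scene.render.resolution_y = 512

# rays per pixel
scene.cycles.samples = 128

# enough bounces for the glossy mirror and the transparent media/lenses
scene.cycles.max_bounces = 5
scene.cycles.glossy_bounces = 5
scene.cycles.transmission_bounces = 5
scene.cycles.transparent_max_bounces = 5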

Now comes the difficult part. The blender python API does not allow us to interact with the render at a low level, because the rendering has to be SUPER-HYPER fast (and even be allowed to run on the GPU). So, how would we know where a light ray hits the surface of the detectors? Fortunately, blender has a special material for which we can code our own "shader" in a C-like language called OSL, or Open Shading Language. OSL allows us to code the way the material treats the light that arrives. For example, if we want an object to appear completely black, we code an OSL function for the material that returns black regardless of where the light hits, the intensity of the light, the angle, etc. OSL is super cool because you can code crazy behaviour for the material. For instance, we can make the material transparent depending on the angle at which the light hits, or reflect different colours depending on the place and direction of the light ray (simulating anisotropy). Anyway, we don't want to create a crazy material. However, by using an OSL function as the material for our detector, we have access to the light hit, including its position on the object, angle, etc. Up to this point we might think that the problem is solved! But unfortunately there is a HUGE drawback. How do we communicate this light hit position to the rest of the system? Remember that OSL has to be super fast. Because of that, OSL shaders are compiled BEFORE the rendering starts and they run natively. So, there is no way to have sophisticated stuff like sockets, memory access, IPC, etc… Everything that happens in the OSL, stays in the OSL…

The solution for this problem was another "gambiarra", which means a set of trickeries involving a lot of different things that usually are NOT meant to be used for that purpose. The first thing was to realise that somehow it should be possible to debug OSL code. It turns out that you can enable a special debug mode by setting the environment variable OSL_OPTIONS="optimize=0". This allows us to use a special "printf" inside the OSL to print stuff to the console. Victory!!! There is our loophole! Now, our OSL function is absolutely trivial. It just prints the position of the light ray hit to the console.

#include <stdosl.h>

 
shader detectorShader(
        output color ColOut = color(0.8))
{
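    // P is the OSL global variable holding the position of the point being
    // shaded, i.e. where the ray hit the detector surface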
    printf("NIRPS %.6f, %.6f, %.6f\n",P[0],P[1],P[2]);
}

Now, every time we render the scene, the console is flooded with the xyz coordinates of light hits. All we have to do is capture them in the middle of all the other console messages. To do that, we start blender using a console pipe that redirects its output as input to another external program, which receives all the prints from the blender console and processes them. This external program is the python script that we call pipingServer. Now that we have this "interpreter" receiving all of blender's console output, we can send commands to it just by printing them, either using python inside blender or using the OSL material we just talked about. Moreover, we have to filter out the normal blender console messages. This is simple enough: we just add a keyword before the prints to identify that we have a string to be interpreted as a command. In our case we prefix all the commands and data with "NIRPS" (which was the name of the project I was working on).

Hence, we need to start blender with the pipe and the filter for the prefix we defined. For convenience, we use a shell script that does all the typing for us:

#!/bin/bash

# unbuffered output so commands are processed immediately
export PYTHONUNBUFFERED=on

# disable OSL optimisation so printf works inside the shader
export OSL_OPTIONS="optimize=0"

blender main.blend | grep --line-buffered "NIRPS" | ./pipingServer.py

Notice that there is one more configuration we have to make. Both python and grep will try to "buffer" their output/input for efficiency. In our case, this is terrible because we want the commands to be processed immediately. This is simple to solve by adding --line-buffered to grep and setting python's environment variable PYTHONUNBUFFERED=on.

By doing all that, it is possible not only to gather data from the wavefront sensor, but also to have a way to send commands to the external python script, where we can plot and compute stuff.

Python module and external plots

As mentioned before, we have this pipingServer that receives commands and data from blender. It is a straightforward infinite loop that reads an input line and processes it. The main loop looks like the following:

while "forever":
    # get a line of blender's console output
    text = input()

    if text[0:5]=='NIRPS':
        # if the message starts with NIRPS, it's ours
        text = text[6:]

        if text[0:4] == 'SAVE':
            # just save data without processing
            doSave(x, y, cx, cy, calCx, calCy, filename)

        elif text[0:4] == 'FILE':
            # set filename to save
            filename = text[5:]


        elif text[0:5] == 'PRINT':
            doPrint(x, y, LIMx, LIMy, cx, cy, calCx, calCy)

        elif text[0:4] == 'PROC':
            # Save data to Matlab
            doSaveData()

            # process light hits to find center
            doProcess(x, y, cx, cy, N, data, LIMx, LIMy)

            # save centers 
            doSaveCxCy()

        elif text[0:3] == 'CAL':
            # perform calibration
            doCalibrate(calCx, calCy, cx, cy, LIMx, LIMy)

        else:
            # if it starts with NIRPS but is just numbers, append to data (they are light hits)
            data.append(list(map(float, text.split(','))))

    else:
        # ignore blender normal console output
        print('[ignored]: '+text)

In this script, no sophisticated math trick is done. It just prepares the RAW data to be used in another script (this one in Matlab) that does the heavy-duty math. One of the important commands is the CAL (calibration) command. It takes the RAW light hits, in the form of xyz coordinates, and computes the centroids that will be used as the "neutral" sub-aperture centers. These are the centers that the light rays hit if there is no object in the light path. The interesting thing is that we can calibrate with anything in the light path and the system still works (because of the calibration step), as long as the object remains in the same position after the calibration.
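To make the idea concrete, the calibration centroids are essentially per-subaperture averages of the hit coordinates. The sketch below is a hypothetical simplification (the names, and the square-window grouping by lenslet pitch, are assumptions; it is not the actual doCalibrate/doProcess code):

import numpy as np

def subaperture_centroids(hits, lens_centres, pitch):
    # hits:         (N, 2) array of light-hit coordinates printed by the detector shader
    # lens_centres: (M, 2) array with the nominal centre of each micro lens / detector
    # pitch:        size of one subaperture (square window around each lens centre)
    centroids = np.zeros_like(lens_centres, dtype=float)
    for i, (cx, cy) in enumerate(lens_centres):
        inside = (np.abs(hits[:, 0] - cx) < pitch / 2) & (np.abs(hits[:, 1] - cy) < pitch / 2)
        if inside.any():
            centroids[i] = hits[inside].mean(axis=0)   # average hit position in this subaperture
        else:
            centroids[i] = (cx, cy)                    # no hits: fall back to the nominal centre
    return centroids

# slopes are then just the displacement with respect to the calibration:
# slopes = subaperture_centroids(hits, lens_centres, pitch) - calibrated_centroids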

Another very important thing that this script does is the RAW plots. One of the commands, the PROC command, calls a function that performs a set of very informative plots.



Figure 8: Example of the python plot window

  • Light hits: The first one is a scatter plot of the RAW light hits. It shows where the light rays arrived on the 2D surface of the detectors.
  • Wavefront slopes: The second one shows the slopes measured from the current calibrated centers to the latest light hits. These indicate the phase distortion that the light suffered when passing through the material in the light path.
  • Wavefront magnitude: This one is just the magnitude of each slope in the second plot. It serves only to visually indicate how bad the distortion suffered by the light is. However, this plot might be misleading, since we can have all slopes with constant magnitude but totally crazy slope angles.
  • Phase image: This one is a bit tricky to explain… This is actually the phase of the light that arrives at the detector. Not to be confused with the phase slope. The phase is literally the delay (measured in phase angle) that the local light wave takes to arrive at the detector. One might think: how the HELL do you measure the "time" that the light wave takes to arrive at the detector???? If you are wondering that, you:

    1 – Have a good grasp of the physics involved. You have this good grasp because you need to understand physics well to realise that this is something almost impossible to measure easily. Moreover, what do I mean by "wave"??? Weren't we simulating light "RAYS"? To simulate light at the wave level requires us to go much much deeper down the rabbit hole! (If you want to go deeper down the rabbit hole, I have a post about simulating light at the wave level 😅.)

    2 – You didn’t read the first part of this post and/or didn’t read the post about wavefront sensing 😅.
  • Point Spread Function: This is the PSF of the last measurement (system + object in the light path). In other words (if you didn't read the last post %-), it's the point of light that we would see if we were at the focus of the system in the position of the WFS. This is not made from the discrete light rays directly, but is a reproduction of the waves of light (yes, waves) that hit the detector!
  • Image: Finally, this is the image that we would see if we were looking at a NIRPS logo far far away. You see it blurry and distorted because, in this example, there is a random atmosphere in the light path.

Matlab module

This part of the system is implemented in Matlab and does the more specific AO math business, like computing the interaction matrix, decompositions, Zernikes, etc. It starts by reading the files (data) written by the pipingServer. After that, the first thing we have to do is compute the interaction matrix. The data needed to do that is more than just the calibration data mentioned before. As explained in the previous post, we need to pinch the DM in several positions to gather enough data to build the interaction matrix, command matrix, command/slope matrix, etc. In the blender interface, there is a command that makes blender do all those measurements. This button is called "Command Matrix". When we click this button, blender produces a pinch of a certain amplitude in each actuator, one at a time, and measures the slopes corresponding to each actuator pinch. For each actuator it saves a file in the /tmp folder called DM_i_j.m, where i,j is the index of the actuator. Those are the files that Matlab uses to gather data to assemble the interaction matrix. One important thing to notice is that this is the same procedure that a real system uses to calibrate the DM/WFS. Again, this is awesome because it means that our simulator does not use any math trick to "emulate" the behaviour of an AO system. Instead, it really simulates the real process! The following figure shows a small animation of the DM pinch calibration.



Figure 9: Calibration of the DM.

After calibrating the interaction matrix, the next step is to read the last measurement (normally with an object in the light path). This will be the measurement that will be corrected by the DM deformation. Hence, the next step is exactly to compute the DM commands that correct for the last measurement. After computing the mirror commands, they are saved in a text file and are ready to be read back in blender.
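The actual script is Matlab, but the core of that computation fits in a few lines. Here is an equivalent NumPy sketch, with made-up sizes and file names, just to show the idea (the real script does much more, including the mode filtering described in the previous post):

import numpy as np

# stand-ins for the data read from /tmp: one slope vector per poked actuator,
# plus the slopes of the last measurement (sizes are illustrative only)
rng = np.random.default_rng(0)
poke_slopes = [rng.normal(size=2 * 68) for _ in range(97)]
measured_slopes = rng.normal(size=2 * 68)

M = np.column_stack(poke_slopes)        # unit pokes: the measurements ARE the interaction matrix
M_pinv = np.linalg.pinv(M)              # command matrix (pseudo-inverse of M)
commands = -M_pinv @ measured_slopes    # minus sign: we want to undo the measured distortion

np.savetxt('/tmp/dm_commands.txt', commands)   # read back by the "Load DM commands" button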

With the commands computed and saved, we can go back to blender and load the mirror commands. The commands are applied to the mirror by displacing the actuators in the same direction in which they were pinched during the calibration (perpendicular to the mirror surface). Since the displacements are really small, in order to visualise the commands, a colormap is applied to the mirror to indicate how much each area of the mirror is deformed. After loading the commands, we can perform a new measurement. Now the measurement will take into account the deformation of the mirror and the deformation due to the object together. If everything goes well, one should cancel the other and we should see a sharp PSF in the plot window, as well as zero slopes, just like right after calibration. In other words, the mirror should compensate for the deformation of the light due to the object in the light path.

Moreover, the Matlab script also computes a lot of side information used during the process of computing the mirror commands. All of that data is plotted in several windows. We can see the original slopes vs the mirror correction, the interaction matrix, the command matrix, etc. Another piece of information that is plotted is the modes of the interaction matrix. It is possible to see the zonal as well as the KL modes and their corresponding slopes. The next figure illustrates some of those plots:


Figure 10: Plots resulting from the Matlab computation of the DM commands. DM commands, interaction matrix, KL modes and slopes, zonal modes and slopes.

Results

To illustrate some results, here we have two examples of the performance of the system. In each one, we put a different object in the light path. Then, we proceed with the computation of the DM commands that correct for the deformation produced by that object. In the end we compare the PSF before and after the correction. In both examples a sequence of pictures is presented that should be, at this point, self-explanatory.

The flat calibration

As mentioned in the previous sections, before doing any measurement, we need to calibrate the system. The calibration is done with the light path empty as shown in the following figure:


Figure 11: Light path during calibration. The orange square is the region where we place objects. For the calibration it must be empty.

As a result, we have perfectly flat slopes and the PSF is perfectly sharp (except for the diffraction limit as explained in the previous post).


Figure 12: Plots result for the calibration. Everything looks flat and the PSF is diffraction limited.

Simple geometry example

The first result is a toy example that uses a very regular shaped object. The goal is to see how the system corrects for extreme distortions.

Figure 13: Results for a basic transparent cone. From top to bottom: 1 – Object in the light path with flat DM. 2 – Measurement plots for the object. 3 – Computed commands for the DM. 4 – Object in the light path and DM with loaded commands. 5 – New measurement showing the correction.

Atmosphere simulation

In this second example we try to simulate a fake atmosphere by using an "irregular and noisy" object. We make this object "elongated" on purpose (see the next section below) so we can have different atmospheres just by moving the object in front of the light path.

Figure 14: Results for a simulated atmosphere. From top to bottom: 1 – Object in the light path with flat DM. 2 – Measurement plots for the object. 3 – Computed commands for the DM. 4 – Object in the light path and DM with loaded commands. 5 – New measurement showing the correction.

Extra bit

Just for fun, we implemented a special loop in blender that moves the atmosphere object by a small amount at each iteration. Each time, it performs a measurement and saves the resulting plot. By doing that for several iterations, we end up with a small animation of a simulated atmosphere in action. So, by looking at the PSF, we see the same effect we have when we look at a star in the sky. You can see this animation in the next video.

Figure 15: Atmosphere simulation using the system

Conclusions

This was a really fun project. I had to learn a lot of stuff and apply math concepts in a very practical way. As I mentioned in the previous post, I didn't really use this simulator for anything serious, but it was very useful for me to understand the concepts I was immersed in. As always, thank you for reading up to here. The code and files are on GitHub and if you have any question, critique or comment, please leave it in the comment section below. I would love to talk about it!


Using Blender to learn the basics of Adaptive Optics (part I / II)

Introduction

In this post I'll present another side project that I "had to do" during my participation in a project in Switzerland (as part of a post-doc). I say "had to do" in quotes because, although it was not one of my official attributions, it helped me A LOT to understand one of the main parts of the project and to develop the part I was responsible for.

Briefly, the project is about the construction of a near infra-red spectrograph to be installed in a telescope in Chile. One of the novelties of this spectrograph is that it has an Adaptive Optics system to stabilise the images and optimize the performance of the instrument (details about the project in a future post). Hence, this post is about Adaptive Optics and more specifically, the small toy simulation I did in blender (together with C, python and Matlab) to understand how things work. In this post you will see no “blender” nor any implementation yet. Instead, I’ll talk about the theory behind it. In the next post I’ll talk about the implementation of the simulator itself.

Adaptive optics (just a tiny little bit)

general idea

Although the math and technology behind AO are extremely awesome and intricate, the basic idea is quite simple and the goal is straightforward. In short, the goal is to make the stars stop twinkling. The stars twinkle because of our atmosphere (see this post Wavefront sensing). In simple words, the atmosphere distorts the light that passes through it, randomly and thousands of times per second. That is what makes its image "twinkle". So, to put it simply, to make the stars stop twinkling, we have to un-distort the image of the star before it reaches the instrument (camera, spectrograph, etc.). The way they do it is by deforming a flexible mirror with the "negative" of the distortion the atmosphere produces.

So, as you can see, the idea is straightforward. However, the amount of science and technology involved in doing it is breathtaking! First, we need to "measure" the deformation that the atmosphere is applying. Then, we have to apply that deformation (the negative of it) to a deformable mirror. Moreover, we have to do that thousands of times per second!!!
The animation below tries to illustrate the process in a very very simplistic way. The first straight lines are the light coming from a star. The blue region is the atmosphere. When the lines pass through the atmosphere they start to "wiggle" to represent the distortion that the atmosphere introduces. The light bounces first off the deformable mirror (which is initially flat). Then it goes to a "splitter" that sends part of the light to the "sensor" that measures the deformation. The other part goes to a camera or instrument. The "sensor" sends the measurement to a computer that processes it and computes the commands necessary for the deformable mirror. The deformable mirror then deforms and the light, from then on, becomes "straight". Notice that, in the camera, the image of the star is "twinkling" at first. After the correction, the image becomes static and sharp.

Figure : Animation of an adaptive optics system

That's the goal of an adaptive optics system. From that goal come a LOT of questions that have to be considered:

  • How to measure the deformation?
  • How to correct for that measured deformation?
  • How can this correction be computed?
  • Is this measurement noisy or precise? How to filter?
  • How to correct for the optics inside the instrument?
  • What kind of mathematical tools are needed?
  • and many more…

The goal of this project was exactly to try to answer/understand those questions.

The wavefront sensor

The wavefront sensor is the device that magically measures the deformation that the atmosphere is applying to the light that comes from the star. More impressively, it's a camera (a special one, but an imaging device nevertheless). That means that, just from the light that arrives at the telescope, it can measure the deformation that was produced. Since this post is not about the device itself, for now it's important only to understand what this measurement produces. For details on how a certain kind of wavefront sensor works, see Wavefront sensing.

There are several types of wavefront sensors. The one we have in this simulation is called a "Shack–Hartmann wavefront sensor". It works by imaging several "copies" of a point source of light (the star) at different spatial positions on a CCD, called "sub-apertures". Depending on the spatial precision that we want to measure, we have more or fewer of those sub-apertures producing the copies of the point light. By measuring the centroid of each sub-image, it produces a "vector field" (called slopes) that corresponds to the direction and amplitude of the deformation at each place in space. The following figures show a real measurement.

Figure : First: Real image of a wavefront sensor (raw image and marked sub-apertures). Second: Vector field resulting from a wavefront sensor measurement

The deformable mirror

As the name implies, this device is a mirror that has a deformable surface. As said before, the goal is to deform the light that comes from the atmosphere with the "negative" of the atmospheric deformation in order to make the effect of the atmosphere itself disappear.
The surface of this mirror is driven by several tiny actuators that push or pull a point on the surface in order to deform it. As with the wavefront sensor, depending on the precision with which we want to deform the light, we can have from tens to tens of thousands of actuators. Also, there are several types of actuators, from piezoelectric to magnetic micro pistons.


Figure : Sample of DM commands. Each "pixel" in this image represents the amplitude of an actuator. This image is what is sent to the mirror and it's called the "DM commands".

Calibration (the Math behind it all)

In the following sections we will talk about the math behind an AO system. I’ll give the view of a non-AO engineer. In other words, I’ll not use the exact terms used in the area of adaptive optics. Instead, I’ll try to always see the AO processes (calibration, measurements, etc) as pure linear algebra operations. That is one of the beauties of using math to do physics 🙃.

The core of the math behind an AO system deals with two basic entities: the wavefront measurement and the commands applied to the DM. They form an "input vs output" pair in the AO system. The main goal is to "map" the mirror commands \mathbf{c} to wavefront slopes \mathbf{s} in the sensor.
In practice, both the mirror and the wavefront slopes have a geometric "shape". Because of that, we have to map the physical place of each actuator / slope. To do that, we define a "mask" for the mirror and for the WFS which contains the position of each element in the command vector

    \[\mathbf{c} = [c_1, c_2, ..., c_n]^t\]

and in the slopes vector

    \[\mathbf{s} = [s_{x_1}, s_{x_2}, ..., s_{x_m}, s_{y_1}, s_{y_2}, ..., s_{y_m}]^t\]

The figure below shows two examples of masks and the positions of the elements of the command vector.




Figure: Example of masks defining the physical position of each command/slope in the actual devices

Using the laws of optics, in theory it is possible to work out the effect that the mirror will have on the light rays until they reach the sensor and obtain a mapping \mathbf{s} = f(\mathbf{c}). However, this function f(.) is really complicated in general. It turns out that the slopes generated by the atmosphere are not so intense, and a first order approximation for f(.) is, in general, good enough. In fact, finding this relationship is what they call "calibration".

Interaction matrix

In principle (and theoretically) the function f(.) should behave as a discrete differentiation. In other words, a plane wave arriving at the mirror should take the form of the commands and the WFS would measure the local slopes of the wavefront. That is exactly what a differentiation would do. However, the difference in size between the WFS and the mirror, factors like crosstalk between actuators, etc. force us to "calibrate" and find a suitable mapping f(.), which is exclusive to the mirror/sensor pair we have.

The first order approximation for the function f(.) represents a linear mapping between mirror commands and wavefront slopes and can be written as

    \[\mathbf{s} = \mathbf{M}\mathbf{c}\]

where \mathbf{M} is called the interaction matrix. The interaction matrix has one column per DM actuator and one row per WFS slope. Each value is the local light displacement in a WFS subaperture caused by pushing that actuator (x displacements in the first half of the rows and y displacements in the second half).

Here is where the math starts. Now, the whole of adaptive optics boils down to the study, processing and understanding of the linear space defined by the equation above.

To obtain this matrix we have to sample several slopes \mathbf{s} from several commands \mathbf{c}. In theory, we could just randomly command the mirror, measure the slopes and find \mathbf{M}. However, since we have full control of what we command, we can just push each actuator of the mirror while leaving all the others at the zero position. If we stack side by side each command for an individual actuator, we end up with an identity matrix for the stacked commands \mathbf{c} and a set of raw measurements for the vectors \mathbf{s}. Hence we have a stack of measurements

    \[\mathbf{S} = [\mathbf{s}^{(1)}, \mathbf{s}^{(2)}, \mathbf{s}^{(3)}, ..., \mathbf{s}^{(N)}]\]

and

    \[\mathbf{C} = [\mathbf{c}^{(1)}, \mathbf{c}^{(2)}, \mathbf{c}^{(3)}, ..., \mathbf{c}^{(N)}]\]

where \mathbf{s}^{(i)} and \mathbf{c}^{(i)} are the measured slopes and the command for sample i. In particular, \mathbf{c}^{(i)} = [0, 0, 0, ..., 0, 1, 0, ..., 0, 0]^t, making \mathbf{C} = \mathbf{I} (the identity matrix).

Although there are several ways to sample the interaction matrix space, this one is interesting because now the relationship becomes \mathbf{S} = \mathbf{M}\,\mathbf{I} = \mathbf{M}. In other words, the measurements themselves assemble the interaction matrix. Those measurements are done sequentially and the animation below illustrates the procedure.


Figure : Animation of the pinch procedure for calibration. In the left we have the slopes and in the right we have their amplitudes.

After all the measurements, the interaction matrix is ready to be used in the AO system. Of course, in practice, some technical and practical aspects are different. For instance, they push and pull the actuators more than one time, or use another set of sample commands, like Hadamard-shaped commands. For the purpose of this simulator, the identity was enough. Below we have an example of an interaction matrix resulting from this procedure.



Figure : Interaction matrix example

With the interaction matrix at hand, we can now do the optical "analysis". In other words, we can predict what the slopes \mathbf{s} will be if we deform the mirror with a particular shape using commands \mathbf{c}. But this is not what we want. In fact we want the exact opposite. We want to measure the atmosphere shape and deform the mirror so as to produce the negative of the measured shape. Since we have a linear relationship, this seems easy enough. We just invert the interaction matrix and write \mathbf{c} as a function of \mathbf{s}, right? Not so fast… First of all, \mathbf{M} is not square. That immediately tells us that there are more degrees of freedom in the commands \mathbf{c}, which means that, for the same slopes \mathbf{s}, there might be an infinite number of commands that produce those slopes. Because of that, we have a whole lot of math to help us choose the right commands to apply to the mirror. This is what we will explore in the next sections.

Phase decomposition

Now that we boiled down the whole AO business to a linear relationship, we can do a lot of tricks with it. Probably the most important thing we should do is to analyse the structure of the matrix M. This matrix embeds the whole optics of command vs slopes in its values. For this kind of analysis, it is really helpful to visualise the linear relationship as blocks of vectors as described by the following figure.


Figure : Linear relationship as combinations of columns of \mathbf{M}

Here, we see that the slopes are organised as X and Y components in a single vector and that the columns of \mathbf{M} span the space of slopes. In other words, the final slopes are a linear combination of some other more basic slopes, and the weights of that combination are exactly the commands \mathbf{c}.

Now let us decompose \mathbf{M} into its singular values and write

    \[\mathbf{M} = \mathbf{V}\mathbf{D}\mathbf{U}^t\]

In this decomposition, the column space of \mathbf{M} is rewritten, or projected, in another basis \mathbf{V} = [\mathbf{v}_{0,1},\mathbf{v}_{0,2},...,\mathbf{v}_{0,n}] which is orthonormal. Each i^{th} column of \mathbf{U} = [\mathbf{u}_{0,1},\mathbf{u}_{0,2},...,\mathbf{u}_{0,m}], together with its corresponding entry of \mathbf{D} = \mathrm{diag}(d_1,d_2,d_3,...,d_m,0,0,...,0), gives the coordinates of the i^{th} column of the original matrix \mathbf{M}. This relationship is illustrated in the diagram below.


Figure : Block representation of the decomposition of \mathbf{M}

With this representation in mind, we can analyse a little bit further what the projected vectors \mathbf{v}_{0,i} and the vectors \mathbf{u}_{0,j} are. By the construction of the decomposition, the \mathbf{v}_{0,i} are an orthonormal projected version of the columns of \mathbf{M}, which means that they are slopes. Since the \mathbf{u}_{0,j} are coordinates (weights for the combination of the slopes \mathbf{v}_{0,i}), they are commands.

Now the question is: what are those "special" sets of slopes and commands?

Modes

The answer to the previous question is: despite the fact that these "special" commands and slopes are orthogonal among themselves, there is nothing optically special about them. They are called "modes". The figure below shows a typical set of modes for commands and slopes from an adaptive optics system.


Figure: Slopes and commands modes. Result of the SVD decomposition of the interaction matrix.

The relationship between each command mode and the slope modes is quite straightforward. If we compute the slopes \mathbf{s} for any given special command, say \mathbf{u}_{0,i}, we have

    \[\mathbf{s} = \mathbf{M}\,\mathbf{u}_{0,i}\]

    \[\mathbf{s} = \mathbf{V}\,\mathbf{D}\,\mathbf{U}^t\,\mathbf{u}_{0,i}\]

Since all the columns of \mathbf{U} are orthonormal and \mathbf{u}_{0,i} is one of them, the last term (\mathbf{U}^t\,\mathbf{u}_{0,i}) will basically be a vector with zeros everywhere except in the i'th row. When we multiply this vector by \mathbf{D} we basically "select" d_i, and now we have a vector with zeros except for d_i in the i'th row. Multiplying this vector by \mathbf{V} means that we select the i'th column of \mathbf{V} multiplied by its corresponding d_i. In short, when we apply one of the command modes, we end up with the corresponding slope mode. By the way, those modes are called "zonal" modes and they are the ones that come out of the SVD decomposition of the interaction matrix.

Now we can focus on the generic commands we will send to correct for the atmosphere. Since those commands will come from the interaction matrix (its inverse, to be more precise), they will be a linear combination of the basis we got from the SVD decomposition. Since that basis is orthonormal, we can easily decompose the command back into its components in these "zonal" modes.

The question is, why decompose back? We mentioned before that when we use one of the modes as a command, we "select" the corresponding mode in the slopes, weighted by the corresponding singular value d_i. If we look at the distribution of those values, we notice that they decrease in amplitude down to values that are very small (see the next figure).



Figure: Distribution of the eigenvalues in the interaction matrix SVD decomposition

When we solve the inverse problem, we present a set of slopes and produce the commands necessary to generate those slopes. If that particular set of slopes has a component along a mode with a small singular value, we will need a very high amplitude in the respective command to generate it. That will produce a mirror configuration with a crazy shape, because remember that everything we are doing depends on small displacements of the mirror. But those slopes might occur due to measurement noise, small optical artifacts, etc. So the solution is to decompose the generated commands and "filter" out those crazy modes.
Hence, the procedure is: we decompose, eliminate the crazy modes, and "recompose" the commands. When we apply this new command, we won't get the original desired slopes, but a "smoother" version of them, which will probably be fine.
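In NumPy terms, that decompose–filter–recompose step is essentially a filtered pseudo-inverse. The sketch below uses made-up sizes and an arbitrary threshold, just to show the mechanics (note that NumPy's svd returns exactly the \mathbf{V}, \mathbf{D}, \mathbf{U}^t of the notation above):

import numpy as np

# illustrative sizes only: 2*68 slopes, 97 actuators
rng = np.random.default_rng(1)
M = rng.normal(size=(2 * 68, 97))         # interaction matrix (slopes x actuators)
s = rng.normal(size=2 * 68)               # measured slopes we want to correct

V, d, Ut = np.linalg.svd(M, full_matrices=False)   # M = V @ diag(d) @ Ut
keep = d > 0.05 * d.max()                 # throw away modes with tiny singular values
d_inv = np.where(keep, 1.0 / d, 0.0)      # filtered inverse of diag(d)

c = -(Ut.T * d_inv) @ (V.T @ s)           # filtered commands, minus sign to undo the slopes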

KL modes

We can look at the problem of finding the interaction matrix as a "regression" problem. We have a linear relationship that we want to find and some samples of it. That linear relationship defines a space. When we poke the mirror one actuator at a time, we are "sampling" that space in order to compute the linear relationship. In that sense, the zonal modes are a new basis for this space. However, there are infinitely many bases, so we could try to find a new basis that has a statistical significance or interpretation. One in particular that might be useful is the Karhunen–Loève basis, or KL for short. The KL basis is obtained by taking the eigenvectors of the covariance matrix of the samples. This is the procedure used in several techniques like principal component analysis (PCA). The idea is that the covariance matrix defines an ellipsoid in the sample space. In that ellipsoid (in several dimensions, of course) each axis is an eigenvector and its length is the associated eigenvalue. The advantage of this new basis is that each eigenvalue represents the variance of the samples in that particular direction (or mode). Hence, a small eigenvalue might mean that the corresponding eigenvector / mode is just "noise" and could be thrown out or ignored.
The next figure shows some of the KL modes, just as an illustration.

Figure: Slopes and commands for the KL modes.

Visually the modes are not so different from the zonal ones. They all have the "humps" and "valleys" which are normal for matrices with the geometry of the interaction matrix. However, if we look at the distribution of the modes in terms of their corresponding eigenvalues, we see a big difference (see the next figure). In the KL version, we have a lot more small eigenvalues, meaning that we can consider reconstructions with fewer modes while having the same overall loss of information.


Figure: Distribution of the eigenvalues in the interaction matrix KL decomposition
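For illustration, obtaining a KL-like basis from a set of sampled slope vectors is just an eigendecomposition of their covariance matrix. A minimal sketch with made-up sizes:

import numpy as np

# 500 sampled slope vectors of length 2*68, one per column (sizes are illustrative)
rng = np.random.default_rng(2)
samples = rng.normal(size=(2 * 68, 500))

C = np.cov(samples)                     # covariance matrix of the slopes
eigvals, eigvecs = np.linalg.eigh(C)    # eigh: C is symmetric
order = np.argsort(eigvals)[::-1]       # sort by decreasing variance

kl_modes = eigvecs[:, order]            # each column is a KL slope mode
kl_variances = eigvals[order]           # small values -> candidate "noise" modes to discard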

Zernike modes

Up to this point, a lot of the mathematical sophistication in an adaptive optics system boils down to choosing a good set of modes, which in turn means selecting a good basis onto which we can project the commands. Hence, yet another interesting basis that can be used is the Zernike basis. This basis is built from a special set of polynomials called Zernike polynomials. They deserve an entire post of their own but, in summary, they are a set of orthogonal polynomials defined on the unit circle. Each polynomial in this set is characterised by two numbers (m,n), m \geq n, that define the degree of the polynomial. The following figure shows a plot of some of those polynomials.


Figure: Plot for Z_{0,0}(x,y), Z_{1,0}(x,y), Z_{1,1}(x,y), Z_{2,1}(x,y), ...

It turns out that those "Zernike modes" are optically meaningful! First, they have nothing to do specifically with AO; they are very useful in optics in general. For instance, eye doctors refer to them a lot to describe eye defects. The first ones even have names familiar to us, like Astigmatism, Coma and Defocus.

But the Zernike polynomials are polynomials. To be used as a basis in AO they need to be spatially sampled to the size of the DM (and hence be usable as commands). However, we need to be cautious because, after spatial sampling, the resulting discrete set (commands to be applied to the DM) is not orthogonal. Nevertheless, the main use of Zernike modes is not to work as a basis per se, but instead to be "applied" to the system so we can measure the result.
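As a reference for what "spatially sampling" a Zernike mode means, here is a small sketch that evaluates an (unnormalised) Zernike polynomial on a square grid with a circular mask, so it could be used as a DM command shape. I use the standard radial/azimuthal indexing Z_n^m here, which may differ from the labels in the figure above:

import numpy as np
from math import factorial

def zernike(n, m, size):
    # sample Z_n^m on a size x size grid, NaN outside the unit circle
    # (assumes n >= |m| and n - |m| even)
    yy, xx = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    rho = np.hypot(xx, yy)
    theta = np.arctan2(yy, xx)

    am = abs(m)
    R = np.zeros_like(rho)
    for k in range((n - am) // 2 + 1):                 # radial polynomial R_n^{|m|}
        c = ((-1)**k * factorial(n - k) /
             (factorial(k) * factorial((n + am) // 2 - k) * factorial((n - am) // 2 - k)))
        R += c * rho**(n - 2 * k)

    Z = R * (np.cos(am * theta) if m >= 0 else np.sin(am * theta))
    Z[rho > 1.0] = np.nan                              # circular mask
    return Z

defocus = zernike(2, 0, 16)    # e.g. a defocus-like shape sampled on a 16x16 actuator grid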

The Point Spread Function

When we finally have the right modes, filtering, etc., the light ends up arriving at the camera and/or detector. An interesting question is: what should we expect to see if the system does its job well? We should see the perfect image of the star, right? Well, unfortunately no… First of all, in general, the star is really really tiny and "a perfect image of a star" would be a few pixels with high brightness and all the other pixels with zero or very low brightness. That would be very good if it happened, but there is a physical (optical) limitation: the aperture of the telescope.

The image that we expect to see, the little "dot" representing the star, is called the point spread function, or PSF for short. As mentioned in the paragraph above, even if everything is perfect and a completely plane wave reaches the telescope, its finite aperture, combined with the "shadow" of the secondary mirror, has an effect on the image. It turns out that the image we see is the inverse Fourier transform of the "image" in the pupil of the telescope. "Image" is in quotation marks in the previous sentence because it's not an image per se. Instead, it is an image where we have constant amplitude where the light goes through and the phase modulated by the slopes of the light that arrives. If the wave is perfectly plane, we have a constant amplitude where the light goes through and a constant phase. In this case, the image that is formed in the detector/camera is not a perfect point (as the object truly is) but, instead, a smoothed point with little rings around it. The next figure shows an example.


Figure: Example of a pupil “image” and its corresponding camera image. We see, in the pupil, the area where the light enters and the shadow of the secondary mirror in the middle. In the camera we see the PSF which is not a perfect point.
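A minimal sketch of that pupil-to-PSF relation (the grid size, obstruction ratio and zero padding are arbitrary choices, not values from the post):

import numpy as np

N = 256
yy, xx = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
r = np.hypot(xx, yy)

# pupil: annular aperture (outer edge plus the shadow of the secondary mirror)
aperture = (r <= 1.0) & (r >= 0.2)

phase = np.zeros((N, N))                 # flat wavefront; use a Zernike map here to add aberrations
pupil = aperture * np.exp(1j * phase)

# PSF = squared magnitude of the Fourier transform of the pupil field (zero-padded for resolution)
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil, s=(4 * N, 4 * N))))**2
psf /= psf.max()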

The explanation for this Fourier relationship between the pupil and the PSF image is, in and of itself, a huge topic called Fourier Optics, which deserves a post (or more) by itself. There is another important relationship we gather from the area of Fourier Optics. If the object that we are observing is not a point, let's say a planet, we can predict what will show up in the camera. It turns out that it's the convolution of the PSF with the original clear image. That is why there is a limit to how sharply an object can be imaged, even if we had no atmosphere at all!

Now we might ask the question: what happens if the phase that arrives is not constant, but modulated by the atmosphere? To have an idea of what happens, we can take a look at the distortions caused by the modes themselves (in particular the Zernikes, since they have optical meaning). The next figure shows the PSFs for each Zernike. Each one is the resulting PSF for a system with a circular aperture (like an eye, or a telescope with no secondary mirror) and phase modulated by the Zernike shape. As mentioned before, each shape has a name in optics, like "Astigmatism". In fact, when a person suffers from astigmatism, it means that the "lens" formed by the eye has that Zernike shape! So, to emulate what a person with astigmatism sees, we convolve the astigmatism PSF with the image and voila! You can have an idea of what it is like to have the problem.


Figure: Set of PSFs for each corresponding Zernike mode

In particular, the 2nd and 3rd Zernikes are called "Tip" and "Tilt". If we look at the corresponding PSFs, we notice that they don't change shape. Instead, they "displace" the PSF to the right or to the left (Tip) and up or down (Tilt). Hence, they are responsible for the movement of the star in the camera. When we see the star twinkling and moving around, the moving around is the result of the optical modes Tip and Tilt combined.

NCPA

Another common calibration that is needed in an AO system is called Non-Common Path Aberration (NCPA). It deals with the fact that optical elements in the system itself might introduce optical distortions or aberrations. One might argue: since we measure the distortions with the WFS, why calibrate for the optics of the system? Shouldn't the calibration of the interaction matrix take that into consideration too? The answer is yes, but only for the light path up to the WFS! If we look at the first figure, with the small animation at the beginning of this post, we see that, at some point, the light coming from the DM splits towards the WFS and the camera/detector. Everything in the path to the camera is not sensed by the WFS, and thus not corrected by the DM. So, those are aberrations that are not common to the path of the WFS (hence the name, NCPA).

But how do we correct for those aberrations that are not sensed by the WFS? In other words, how do we measure those aberrations in order to eliminate them? The answer is, we don't! There is a very clever trick the engineers came up with that involves using the previous calibration (the interaction matrix) to help. To correct for the NCPA, we assume that the interaction matrix is calibrated. That means that we can produce any shape on the DM accurately. Hence, the AO system has a special light source (normally a laser) that is placed before the mirror and simulates a perfect point source of light. With that light, if the mirror is flat, we should see the PSF corresponding to the shape of the mirror and the optics in the path (in general circular). Due to the NCPA, the image will not be a perfect PSF but, instead, a distorted version of it.

The trick is to “guess” the distortion. A very, very naive way to do that would be to search over mirror configurations until one of them produces, in the camera, the perfect PSF we would expect from a plane wave (remember that the WFS is already calibrated). Of course, this is unfeasible due to the infinite number of possibilities. Fortunately, we don’t need to test randomly. We can use configurations for which we know what to expect in the image. Those are exactly the Zernike modes.

Hence, the NCPA algorithm is pretty simple. We apply each Zernike mode, skipping the first one (which is flat) and also Tip and Tilt (which do not change the shape of the PSF). We apply them to the DM sequentially, scanning the amplitude from, say, -0.5 to 0.5, and observe the image. For the amplitude whose image looks most like the one we expect for that mode, we store that value; that is the amplitude for that mode that corrects the NCPA. We do that for several modes (say the first 10 or 30). In the end we will have, for each Zernike mode, a small amplitude which, all together, correct the NCPA. Unfortunately, while correcting for one mode we might slightly de-correct the previous ones; that is why we have to run several iterations of this sequence. In a sense, we are solving an optimisation problem by searching one dimension at a time, iteratively.

In summary, we have to find a vector \mathbf{a}=\left[a_1, a_2, a_3, ..., a_N\right] of amplitudes for the first N Zernike modes which, when applied together to the DM, produce a perfectly clean PSF (just the small blob with the rings). The algorithm searches for a point in the N-dimensional hypervolume defined by setting limits for each amplitude, -A_i < a_i < A_i. In the end, the command that corrects the NCPA is

    \[\mathbf{c}_{NCPA} = \sum_{i=1}^{N}a_i\,\mathbf{z}_i\]

where \mathbf{z}_i are the Zernike modes. Fortunately, the “function” we are optimising is well behaved (it has a single maximum) and can be easily evaluated by measuring the flux of the brightest pixel in the image. Hence, we are maximising the flux in the brightest pixel of the camera image.

The algorithm is shown in the next figure. There, N_i is the number of iterations, N_z is the number of Zernike modes to search, A is the maximum amplitude for the Zernike amplitudes and N_a is the number of amplitudes to scan from -A to A.


Figure: NCPA algorithm

One interesting point is that each measurement is not instantaneous, so if we have, say, 4 iterations over 30 Zernikes, we will have to run 120 scans. To get a good optimisation we also need a “fine” scan for each Zernike. The engineers narrow down the amplitude limits (the value of A) as much as possible, but we still have to try several values in that range. Say we have A = 0.5, so we scan from -0.5 to 0.5. Which step should we use? If we want a fine search with 0.001 precision, we would have to compute 1000 steps, and that for each of the 120 scans. Clearly, that is too much. Instead, the trick is simple: we take very few steps (like 5 or 7), fit a curve over the points (a Gaussian, for instance) and compute its maximum analytically. The next animation shows a real PSF resulting from the execution of an NCPA algorithm.


Figure: Animation of the PSF in a real NCPA execution during NIRPS development
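The loop below is a minimal sketch of this search, not the code used on any instrument. measure_peak_flux() is a hypothetical stand-in for “apply the command to the DM and read the brightest pixel of the camera”, and a parabola is fitted instead of a Gaussian just to keep the example short:

import numpy as np

def ncpa_search(zernikes, measure_peak_flux, n_iter=4, A=0.5, n_amp=7):
    # Coordinate-wise search for the NCPA-correcting amplitudes.
    # zernikes: list of DM command vectors z_i (skipping piston, tip and tilt)
    # measure_peak_flux(command): hypothetical callback that applies the command
    # and returns the flux of the brightest pixel in the camera image.
    a = np.zeros(len(zernikes))                 # current best amplitude per mode
    amps = np.linspace(-A, A, n_amp)            # the few scan points per mode
    for _ in range(n_iter):
        for i, z in enumerate(zernikes):
            base = sum(aj * zj for aj, zj in zip(a, zernikes)) - a[i] * z
            flux = [measure_peak_flux(base + amp * z) for amp in amps]
            # fit a parabola to the few measured points and take its extremum analytically
            c2, c1, c0 = np.polyfit(amps, flux, 2)
            a[i] = np.clip(-c1 / (2 * c2), -A, A)
    return a                                    # c_NCPA = sum_i a_i * z_i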

Another situation in which the NCPA algorithm helps is when we don’t have a mirror that can be made “flat” by command. Some mirrors have piezo actuators and we do not control their position exactly: we send each actuator to a position but have no feedback on whether it reached the right place. This kind of mirror is very difficult to make “flat”. With the help of the NCPA algorithm we can correct for that actuator error too. The next figure shows a plot of a real NCPA result. Each line in the plot is a Zernike / iteration. The fitted Gaussian can also be seen for each mode.


Figure: Example of real measurements from an NCPA run performed on the NIRPS instrument

To be continued…

In the next post we will see part II of this big post/project.

References

  • Adaptive Optics – https://en.wikipedia.org/wiki/Adaptive_optics
  • SVD – https://en.wikipedia.org/wiki/Singular_value_decomposition
  • KL Transform – https://en.wikipedia.org/wiki/Karhunen–Loève_theorem
  • Zernike Polynomials – https://en.wikipedia.org/wiki/Zernike_polynomials
  • Fourier Optics – https://en.wikipedia.org/wiki/Fourier_optics
  • New concepts for calibrating non-common path aberrations in adaptive optics and coronagraph systems – https://arxiv.org/pdf/1909.05224.pdf

Reading vector graphics

Introduction

In this post I’ll describe a small implementation of a custom “file format converter”. It is a small python script to convert files from the Wacom WILL file format to SVG. Wacom is a company that manufactures drawing tablets. A couple of months ago I purchased a Wacom Bamboo Slate [1]. I always wanted to have an A4-sized drawing tablet to be able to doodle around and have everything stored digitally. The Bamboo Slate sounded like the perfect solution for my case because, with it, you write or draw on normal paper and the tablet captures the strokes while you write/draw.

One of the specifications I demanded before buying the tablet was the ability to save in the SVG format. SVG is vectorial, open and XML based, and we have APIs to read SVG in practically all languages. The Bamboo Slate had that option in the app, so I decided to go for it.

One of the reasons I wanted a vector format for the digital versions of the doodles was to be able to “time lapse” the work. For drawings, that would generate a cool movie of the drawing being made, but for the writing it would have a more important purpose. I usually do my maths on paper. It helps me “flow” when I’m deriving a formula or just expanding an equation. Most of the time, I want to keep what I wrote, and scanning was generally the only solution. The problem with scanning is that I lose the “time” of the writing (like the sequence of formulas that sometimes gets “distributed” across the page 😅).

The problem

With the tablet in hand, it was time to do the first test. So I doodled a bit with the pen and synchronised the tablet with the app, which we can use to export. When I exported to SVG I had an unfortunate surprise. The lines are not SVG line elements. Since the pen is pressure sensitive, in order to export the “thickness” of the traces, the SVG contains, for each trace, a polygon with the shape of the line, not the line itself 😢.

So, I didn’t have the information I wanted, which was the “path” followed by the pen. Rather, I had a bunch of “worm shaped” polygons that “drew” each line segment. What I wanted was, for each path, a bunch of points and, for each point, the “pressure” of the pen at that point. I could not believe that Wacom would store the lines as shapes! So, I tried to understand the e-ink format that Wacom uses (the WILL file format). This was supposed to be a universal, vectorial format specialised in storing drawing strokes. And it is! It turned out that the WILL format stores exactly what I wanted. Figure 1 tries to illustrate the difference.

It seems that the SVG specification is not generic enough to store pen and pressure data. And that was a pain, because SVG is a text, XML-based file, so it is very easy to read. WILL, on the other hand, is a binary package that stores a bunch of extra information I didn’t care about (like image snapshots, descriptors, etc.).

Formats and standards

For what I intended, the pressure data was not important. I wanted the path points, to be able to do my time lapses and/or any other business that would require the geometry of the traces (not the “drawing”). Hence, I was still interested in exporting to SVG, but this time each stroke would be a line, not a little digital “worm”. It turned out that both formats store paths as bezier curves, so that was good for me. However, I had to read the WILL file in order to extract only the bezier control points and save them to SVG.

SVG

SVG is a pretty big and versatile format for vector graphics. It is basically an XML file with elements that represent shapes, lines, etc. For this project, I just wanted to store bezier segments. That can be done with the tag <path stroke=”…” d = “…” > </path>. In this tag, stroke is the parameter for the color of the line (as a simple html color value like #000000) and d is the data for the points. The d parameter is a string with “pen commands” and coordinates. We use only the “M” and “C” commands. “M” is the “move to” command, which moves the “pen” to a given position. “C” is the “curve to” command, which draws a bezier curve with the given control points. So, the data “M 100 200 C 200 50 300 50 400 200” would draw the blue line below.
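Just as an illustration (this is not the converter’s actual code), writing such a path from Python is plain string formatting; the file name, size and colour below are arbitrary:

# minimal SVG with one cubic bezier path: M = "move to", C = "curve to" (two control points + end point)
path_data = "M 100 200 C 200 50 300 50 400 200"
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="500" height="300">\n'
    '  <path stroke="#0000ff" fill="none" d="' + path_data + '" />\n'
    '</svg>\n'
)
with open('example.svg', 'w') as f:
    f.write(svg)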

WILL

The WILL file format is the format used by Wacom to save the strokes we make on the tablet. It was created specifically to store strokes from digital ink devices. It can store pen color, size, pressure, type, etc. As a file, it is a zip package containing a mini file system that stores a lot of information. It can contain JPEGs, SVGs and others. In one of the directories there is a file that, in general, is called strokes.protobuf.

strokes.protobuf is the file that contains the data about the points where the pen passed. As explained on the Wacom website, the points are stored as splines that follow more or less the same behaviour as the bezier curves in SVG. In the case of the splines, the curve tangent is given by the neighbouring points. Moreover, each stroke path has two parameters called start and end, which are values of a parametric variable “t” that normally is 0.0 for start and 1.0 for end. They can be used to describe the “speed” of the stroke (which I ignore in the project, since I noticed that, for my tablet, they are always 0.0 and 1.0).

All paths are stored as “messages” in the strokes file. The structure of those messages is

message Path {
    optional float startParameter = 1 [default = 0];
    optional float endParameter = 2 [default = 1];
    optional uint32 decimalPrecision = 3 [default = 2];
    repeated sint32 points = 4;
    repeated sint32 strokeWidths = 5;
    repeated sint32 strokeColor = 6;
}

The “points” and “strokeWidths” are stored as “deltas”, which means that each value is the difference from the previous one.
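Recovering absolute coordinates from those deltas is then just a running sum. The sketch below assumes, as it seemed to be in my files, that the points array interleaves x and y and that decimalPrecision gives the number of decimal places (fixed-point values):

def deltas_to_points(deltas, precision=2):
    # Turn the delta-encoded, fixed-point "points" field into absolute (x, y) pairs.
    scale = 10 ** precision
    xs, ys = deltas[0::2], deltas[1::2]       # interleaved x and y deltas (assumption)
    points, x, y = [], 0, 0
    for dx, dy in zip(xs, ys):
        x += dx
        y += dy
        points.append((x / scale, y / scale))
    return points

# deltas_to_points([1000, 2000, 50, -30]) -> [(10.0, 20.0), (10.5, 19.7)]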

Finally, the messages are stored in a simple “size” + “buffer” sequence, as shown in the following table:

Bytes              Description
base-128 varint    Path 1 message length
bytes              Path 1 message bytes
base-128 varint    Path 2 message length
bytes              Path 2 message bytes
…                  …
base-128 varint    Path n message length
bytes              Path n message bytes

protocol buffers

After reading the format described in the Wacom WILL file documentation, I got really excited to implement the reading of the WILL file and the writing of the SVG. It looked pretty simple and intuitive! Well, if the story had ended there, I would not be writing this post, since it would be a boring one 😅.

As the extension of the strokes file suggests, the binary format of the data contained in it is the so-called “protocol buffer”. This is where the fun really started for me (fun in the sense of not being a trivial task). After a simple google search for this “protobuf” format, I found Google’s documentation explaining the whole thing.

Basically, a protocol buffer is a protocol (a set of rules) that states how we decode information from the sequence of bytes in a buffer (an array or sequence of bytes). I always liked “decoding” stuff from a sequence of bytes and had a lot of fun in the past trying to reverse engineer protocols in byte sequences. In particular, I used to “hack” files from saved games. I would open the binary save files in a hex editor and look for the places where the number of lives, coins or ammunition was stored. I remember being bad at Sim City because the first thing I always did was to save an initial city, open the saved file and increase the money to 2^32 dollars. Then I cut the taxes to 0, people were always happy and I had fun building all kinds of stuff with practically infinite money 😂😂😂.

But going back to this post: this time Google helped me with the dirty work. In general, when a protocol is defined at “byte” level, things are simple. For instance, to read a float, you read 4 bytes and perform a “typecast”, inverting or not the order of the bytes. In the case of the protocol buffer format, the protocol is “bit” based, in the sense that sometimes 4 bits mean one thing and the next 6 mean something else. In that case, you have to read byte by byte and perform low-level bit “wizardry” to get the information out. For that low-level bit hocus pocus, Google provides a python library with the basic bit manipulation routines, which I could use to decode the buffers present in the strokes.protobuf file. But the library only has functions for the basic manipulations. The “walking” over the buffer and the reading of the data in a structured way had to be done by myself.

Here I’ll try to explain the basic decoding process of strokes.protobuf using the google.protobuf API. The bit format of the protocol is described on the google protocol buffers page. The protocol uses the concept of messages to store data. Messages are key-value pairs where the key specifies the type of the “value”. All protobuf messages present in strokes.protobuf are either sequences of bytes (to encode floats, for example) or individual (or sequences of) varint values.

varints are variable-size integers that are stored with a variable number of bytes: if the value is small, it uses fewer bytes. So, the general procedure for reading a protobuffer is, starting from the beginning of the buffer, to read a key (stored as a varint) and, depending on the value and size of that varint key, advance to the next position in the buffer, where you read the value for that key. To do that, there are basically three important functions:

google.protobuf.internal.decoder._DecodeVarint32
google.protobuf.internal.decoder.wire_format.UnpackTag
google.protobuf.internal.decoder.struct.unpack

_DecodeVarint32 This is the function that does the binary wizardry to get an integer out of a buffer. It receives the buffer and a position at which to start decoding the number. As output, it returns the decoded number itself, and something else. The big question is: what else could this function return? It returns the next position in the buffer (where you would find more data). This is exactly the point of varint types: the size of the byte representation depends on the number itself!

UnpackTag This function takes an int32 value and interprets it as a protobuffer tag. It then returns the field and type of that tag. In fact, field and type form the “key” of the protobuffer key-value pair. In the WILL protobuffer file, field is a number from 1 to 6, matching the message definition above. type is a number that indicates the kind of value (see the protobuffer reference). For instance, type = 5 means a fixed 32-bit number (which could be a float).

unpack This last function does the “regular” (non-varint) bit cast. It gets a buffer and a type string and returns a tuple with the decoded values. For instance, if we have an 8-byte buffer and the type string is ‘fi’, it will return the first element as a float and the second as an int32.

So, in order to decode a WILL protobuffer, we basically keep reading the buffer message by message (each one with up to 6 fields) and concatenate the XY coordinates that come out of each one. For each message, we read its size and then repeat the following actions: _DecodeVarint32 to get the key, UnpackTag to get field and type (just as a check, since we know what to expect) and unpack to get the values (for fields 1 and 2 we get floats, for field 3 the precision, for field 4 the points, and so on).

For fields 4, 5 and 6, you get a “string buffer”, which is a protobuffer in itself. Hence, the first value decoded is the length, and the following values are points (if the field is 4), stroke widths (if the field is 5) or colors (if the field is 6). The next figure shows one message with colours separating the fields in a hex editor. Each magenta byte is a varint key and the following data is its value. We can see the 6 fields. Cyan bytes are lengths, and we can see that some values are buffers themselves, which have a length as their first content.
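Putting the three functions together, the sketch below shows roughly how one length-prefixed Path message can be walked. It is a simplification of the real script: error handling is skipped, the zigzag decoding of the sint32 values is omitted, and the field numbers follow the message definition shown earlier:

import struct
from google.protobuf.internal.decoder import _DecodeVarint32
from google.protobuf.internal import wire_format

def read_path_message(buf, pos):
    # Decode one length-prefixed Path message starting at pos; returns (fields, new_pos)
    size, pos = _DecodeVarint32(buf, pos)          # message length (varint)
    end = pos + size
    fields = {}
    while pos < end:
        tag, pos = _DecodeVarint32(buf, pos)       # key of the next field
        field, wire_type = wire_format.UnpackTag(tag)
        if wire_type == 5:                         # fixed 32-bit value: start/end parameter
            fields[field] = struct.unpack('<f', buf[pos:pos+4])[0]
            pos += 4
        elif field == 3:                           # decimalPrecision, a plain varint
            fields[field], pos = _DecodeVarint32(buf, pos)
        else:                                      # fields 4, 5, 6: a nested length-delimited buffer
            length, pos = _DecodeVarint32(buf, pos)
            sub_end, values = pos + length, []
            while pos < sub_end:
                v, pos = _DecodeVarint32(buf, pos) # packed varints (zigzag decoding omitted)
                values.append(v)
            fields[field] = values
    return fields, pos                             # fields 1-2: floats, 3: int, 4-6: lists of raw varints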

Result

The result is a small python script that receives a WILL file and spits out the correct SVG file with the strokes as single lines instead of stroke “shapes”. Just to illustrate, here goes an example of a file. The first figure shows the plot made by the python script (taking stroke width into account). The second figure shows the preview of the clean SVG file generated by the script.

References

[1] – Code on GitHub
[2] – Wacom Bamboo Slate
[3] – Wacom WILL file format
[4] – Google protocol buffers
[5] – Wikipedia – SVG file format


Counting cells

Introduction

In this post I’ll talk about my largest project in terms of time. It started in the middle of the year 2000 as my Master’s thesis subject and ended up extending until around mid-2013 (although it remained in hibernation for almost 10 years in between). The project was an “automatic cell counter”. Roughly speaking, it is a computer program that takes an image with two kinds of cells and tells how many of each kind there are in the image.

At the time, my advisors were professors Adrião Duarte and Agostinho Brito (two of the nicest people on this planet). Also, I had no idea about the biology side of things, so I had a lot of help from Dr. Alexandre Sales and Dra. Sarah Jane, both from the “Laboratório Médico de Patologia” back in Brazil. They were great at explaining all the medical stuff to me. They also provided a lot of images and testing for the program.

The “journey” was an adventure: from publishing my first international paper, and going to Hawaii on my first plane flight, down to asking the local financing agency (FAPERN) for funding to buy a microscope. Although this won’t be a fully technical post, the goal is to talk about the project itself and things related to it during the journey.

The problem

The motivation for the project was the need for cell counting in biopsy exams. The process goes more or less like this: when you need a biopsy of some tissue, the pathologist prepares one or more laminas with very, very thin sheets of the tissue. The tissue on those laminas is coloured with a special pigment. That pigment has a certain protein designed to attach itself to specific substances present in cells that are behaving differently, for instance undergoing division. The result is that the pigment makes those cells (actually their nuclei) appear in a different colour, depending on the pigment used. Then, we have a means to differentiate between the two “kinds” of cells on the lamina.

The next step is to count the cells and compute how many of them are coloured and how many are not. This ratio (percentage of coloured vs non-coloured) will be the result of the biopsy. The problem here is exactly the “counting” procedure. How the heck does one count individual cells? The straightforward way is to put the lamina into the microscope, look into the objective and, patiently and with caution, start counting by eye. Here we see a typical image used in this process.

There are hundreds of nuclei (cells) in those kinds of images. Even if you capture the image and count them on a screen, the process is tedious! Moreover, the result can’t be based on one single image. Each lamina can produce dozens or hundreds of images, which are called fields. To give the result of an exam, several fields have to be counted.

Nowadays, there are machines that do this counting, but they are still expensive. At the time I did the project, they were even more expensive and primitive. So, any contribution that would help the pathologist do this counting more precisely and easily would be very welcome.

The App

The solution proposed in the project was to make an application (a desktop program at the time) that would do this counting automatically. Even then, it was relatively easy to put a camera into a microscope. Today it is almost a default 😄. So, the idea was to capture the image from the camera coupled to the microscope, give it as input to the program, and the program would count how many cells there are in the image.

Saying it like this makes it sound simple… but it is not trivial. The cells, as you can see in the previous sample image, are not uniformly coloured, there are a lot of artefacts in the image, the cells are not perfect circles, etc. Hence, I had to devise an algorithm to perform this counting.

The algorithm is basically divided into 4 main steps that are very common in image processing solutions:
1 – Pre-processing
2 – Segmentation
3 – Post-processing
4 – Counting

Algorithm

The pre-processing stage consisted in applying a colour transform and performing an auto-contrast procedure. The goal was to normalise the images for luminance, contrast, etc. The segmentation step consisted in labelling each pixel as one of three categories: cell type 1, cell type 2 or inter-cellular medium (non-cell). The result is an image where each pixel is either 1, 2 or 3 (or any arbitrary triple of labels). After that comes the most difficult part, the post-processing. The segmentation leaves several cells touching each other and produces a lot of artefacts, like tiny regions marked as cells where there were none. Hence, the main scientific contribution of the project was the use of mathematical morphology to process the segmented image and produce an image with only regular blobs where the cells are. Finally, the counting step simply counts the regions labeled as 1 or 2. The technical details of all those steps are in the Master’s document and in the papers (see references).
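Just to give the flavour of such a pipeline, here is a generic sketch using scipy. It is not the thesis algorithm (which used neural networks for the segmentation and more elaborate morphology), and the colour thresholds are purely illustrative:

import numpy as np
from scipy import ndimage

def count_cells(rgb, threshold_1=0.6, threshold_2=0.6):
    # Toy pipeline: normalise, label pixels into two classes, clean up and count.
    # threshold_1 / threshold_2 are hypothetical colour thresholds for the two cell types.
    img = (rgb - rgb.min()) / (rgb.max() - rgb.min() + 1e-9)     # 1) pre-processing

    # 2) segmentation: label each pixel as cell type 1, cell type 2 or background
    mask1 = img[..., 0] > threshold_1            # stand-in rule for the "coloured" nuclei
    mask2 = (img[..., 2] > threshold_2) & ~mask1

    counts = []
    for mask in (mask1, mask2):
        # 3) post-processing: morphological opening removes tiny artefacts
        # 4) counting: connected components give the number of cells
        clean = ndimage.binary_opening(mask, iterations=2)
        _, n = ndimage.label(clean)
        counts.append(n)
    return counts                                # [cells of type 1, cells of type 2]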

Old windows / web version

As the final product of the Master’s project, I implemented a Windows program in C++ Builder with a very simple interface. Basically, the user opens an image and presses a button to count. That was the program used by Dr. Alexandre in his laboratory to validate the results. I found the old code and made it available on GitHub (see references).

Together with the Windows program, I also made an “online version” of the application. At the time, an online system that would count cells from an uploaded image was a “wonder of technology” 😂. Basically, it was a web page powered by a PHP script that called a CGI-bin program which did the heavy-duty work (the implementation of the algorithm) and showed the result. Unfortunately, I lost the web page and the CGI code. All I have now is a screenshot that I used in my Master’s presentation.

iOS

After the Master’s defence, I still worked for a couple of months fixing some small stuff and making small changes here and there. Then, I practically forgot about the project. Dr. Alexandre kept using it for a while, but no modifications were made.

Almost 10 years later (around 2012), I was working with iOS development and had the idea of “porting” the program to a mobile platform. Since the iPhone (and iPad) has a camera, maybe I could use it to take a picture directly through the objective of the microscope and have the counting right there! So, I did it. I made an app for iOS called PathoCounter, with which you could take a picture and process the image directly on the device.

The first version worked only on my phone, just as a proof of concept (which worked). Then, I realised that the camera on the phone was not the only advance that those 10 years of Moore’s law gave me. There was a godzillion of new technology that I could use in the new version of the project! I could use the touch screen to make the interface easy to operate and to allow the user to give more than just the image to the app. The user could paint areas where he knew there were no cells. We could use more than the camera to get images: Dropbox, the web, the phone gallery, etc. We could produce a report right on the phone and share it with other pathologists. The user could have an account to manage his images. And so on… So I did most of what came to my mind and ended up with a kind of “high end” cell counting app, which I made available on the Apple Store. I even put in a small animation of the antigen proteins “fitting” into the cell body 🤣. To my surprise, people started to download it!

At first, I hosted a small web system where the user could log in and store images to use within the app. I tried to follow some rules of website design, but I’m terrible at it. Nevertheless, I was willing to “invest my time” to learn how a real piece of software would work. I even learned how to promote a site on Facebook and Google 😅.

However, I feared that if I had many users the server would not be robust enough, so I tried a crowdfunding campaign at indiegogo.com to buy a good computer to act as a private server. To my surprise, some friends backed the campaign and, in particular, one of my former students, Victor Ribas, donated $500! I’m still thankful to him and I was very happy to see such support!! Unfortunately, the amount collected wasn’t enough to buy anything different from what I already had, so I decided to donate the money to a local children’s hospital that treats children with cancer. Since the goal of the app was to help in the fight against cancer, I figured the hospital was a good choice for the donation.

Sadly, this campaign also attracted a bad audience. A couple of weeks after the campaign closed and I donated the money, I was formally accused of unethical fund-raising for public research. Someone in my department filed a formal accusation against me and, among other things, claimed that I was taking people’s money through unsuccessful crowdfunding campaigns. That hurt a lot. Even though it was easy to write a defence, because I didn’t even keep the money, I had to call the hospital I had donated to and ask for a “receipt” to attach as proof in the process. They had offered a receipt at the time I made the donation, but I never imagined that two weeks later I would go through this kind of situation, so I had told them it wasn’t necessary, since I am not a legal entity or company, and I didn’t want to bother them with the bureaucracy.

Despite this bad pebble in the path, everything that followed was great! I had more than 1000 downloads all around the world. That alone was enough to make me very happy. I also decided to keep my small server running. Some people even got to use the server, although just a handful. Another cool thing that happened was that I was contacted by two pathologists outside Brazil: one from Italy and another from the United States. The app was free to download and the user had 10 free exams as a demo. After that, the user could buy credits, which nobody did 😂 (I guess because you could just reinstall the app to get the free exams again 😂). I knew about that “bug”, but I was so happy to see people actually using the app that I didn’t bother (although Dr. Alexandre told me I was a “dummy” for not properly monetising the app 🤣).

The app was doing so much more than I expected that I even made some videos explaining its usage. Moreover, it had about 2 or 3 updates, which added some small new features, fixed bugs, etc. In other words, I felt like I was developing something meaningful, and people were using it!


Results

As I said, the main result for me was the usage. Knowing that people were actually using my app felt great. Actually, the pathologist from the United States who contacted me said they were using it in dog exams! I didn’t even know they did biopsies on animals 😅.

Analytics

Back when I was in the iOS programming “vibe”, I used to put a small module in all my apps to gather anonymous analytics data (usage statistics). One of the pieces of data I gathered, when the user permitted it of course, was the location where the app was being used. Basically, every time the software was used, the app sent the location (without any information or identification of the user whatsoever) and I stored it. After one or two years of usage, this was the result (see figure).

Each icon is a location where the app was used and the user agreed to share the location. The image speaks for itself: basically all the continents were using the app. That is the magic of today’s online stores. Anybody can have their app used by, easily, thousands of people! I’m very glad that I lived this experience of software development 😃.

Media

Somehow the app even got the media’s attention. A big local newspaper and one of the public access TV channels here in my city contacted me. They asked if they could write an article about the app and interview me about it 🤣😂. I told them that I would look bad in front of the camera, but that I would be very happy to talk about the app. Here goes the first page of the article in the “Tribuna do Norte” newspaper and the videos of the interviews (TV Tribuna and TV Universitária UFRN).

Conclusion

I don’t know for sure if this app helped somebody diagnose an early cancer or helped some doctor prescribe a more precise dosage of some medicine. However, it shows that everyone has the potential to do something useful. Technology is allowing advances that we could never imagine (even 10 years before, when I started my Master’s). The differences in technology from 2002 to 2012 are astonishing! From a Windows program, through an online web app, to a worldwide-distributed mobile app, software development certainly changed a lot!

As always, thank you for reading this post. Please, feel free to share it if you like. See you in the next post!

References

Wikipedia – Immunohistochemistry

C++ Builder source

Sistema de Contagem e Classificação de Células Utilizando Redes Neurais e Morfologia Matemática – Tese de Mestrado

MARTINS, A. M.; DÓRIA NETO, Adrião Duarte ; BRITO JUNIOR, A. M. ; SALES, A. ; JANE, S. Inteligent Algorithm to Couting and Classification of Cells. In: ESEM, 2001, Belfast. Proceedings of the Sixth Biennial Conference of the European Society for Engineering and Medicine, 2001.

MARTINS, A. M.; DÓRIA NETO, Adrião Duarte ; BRITO JUNIOR, A. M. ; SALES, A. ; JANE, S. Texture Based Segmentation of Cell Images Using Neural Networks and Mathematical Morphology. In: IJCNN, 2001, Washington. Proceedings of IJCNN2001, 2001.

MARTINS, A. M.; DÓRIA NETO, Adrião Duarte ; BRITO JUNIOR, A. M. ; FILHO, W. A. A new method for multi-texture segmentation using neural networks. In: IJCNN, 2002, Honolulu. Proceedings of IJCNN2002, 2002.


Doing better than Lego House

Introduction

In this post I’ll talk about a small experience I had with a Lego set that I acquired on my trip to Legoland in Billund, Denmark (yes, the original Legoland 😃). I won’t talk about the trip itself, which was the best of my life. It was so because I had the opportunity to bring my daughter and my niece and see their happy faces when they entered the park. Rather, I’ll talk about a small project motivated by a special LEGO set I bought at the LEGO House.

Lego House (https://www.legohouse.com/)

The Lego House is amazing. It is located close to Legoland, also in Billund. I won’t describe it here because I would have to write a book 😄. The point is that there is a LEGO store inside it, and they sell a very special LEGO set: set #40179, the “Personalised Mosaic Picture”. It contains only a large base plate with 48×48 studs (currently the largest in a set) and about 4500 1×1 plates divided into 5 different colours (yellow, white, black and two shades of grey).

Lego Custom Mosaic set

The goal of the set is to let the owner “draw” a mosaic picture with the 1×1 plates on the big base plate. What makes this set special are the instructions. The instructions are a big 1:1 picture, the size of the plate itself, which indicates the position and colour of every little 1×1 plate. You may ask: what picture comes in the set? Yours!

The Lego Set

As I mentioned, it is your picture that comes as the instructions! When you buy the set in the store, they guide you to a small photo booth where they take your picture to make the instructions. The whole process is made for children and it is quite a piece of work 😆. First, they take some pictures of your face and show them to you on a screen inside the booth. There, you see how the mosaic is going to look when you assemble it. When you are satisfied with the mosaic, you step out of the booth and wait for your custom set. The set is delivered automatically through a small hole beside the booth. While you wait, you watch an animation of a couple of LEGO minifigures dressed as workers producing your set. When done, the booth spits out your big set with your picture turned into the LEGO instructions poster.

Doing that for my 3 year old

I could not leave the LEGO House without buying one of those. Of course, I would not put my own picture on it. Instead, I wanted to make one for my little daughter. And that was when the problem arose. The lady in the store told me that it would be very, very difficult to get a good picture of a 3-year-old. First, she is not tall enough. Then, she would not stay still, and her little head would not be close enough to the camera to make a good mosaic. All of that is true. Remember that the picture should be perfect, because the mosaic will be a representation of her face using only 4 shades of grey (and possibly a yellow background) in a 48×48 pixel matrix.

But wait… what if I did the mosaic myself? This set is practically only “software”. Regardless of the picture, the set has 4500 1×1 plates and a giant 48×48 base, and that’s it! The rest is image processing! I talked to the lady in the store and told her that I would take the risk of not having a good picture. She told me about all the difficulties, bla bla bla, but I insisted: “Trust me, I’m an engineer” (just kidding, I told her I just wanted the 4500 1×1 plates 😅). She finally agreed. We even tried to get a “reasonable” mosaic, but it is really very, very difficult. The lady was very nice and very patient. We tried like 6 or 8 times, but the best we could do was this:

Original Lego mosaic picture

I didn’t like the mosaic at all (the shapes are too flat), but remember that I had other plans 😇. Besides, my little one does not like pictures and it is rare to get one with her smiling. Anyway, with the set bought, I was ready to do my small weekend project as soon as I was back.

Doing a better mosaic

The first thing I had to do was to find a better picture to be turned into the mosaic. So I started to dig for a picture that I liked (at least one with her smiling 😅). It didn’t even need to be a full picture. I could (and still can 😊) take any picture and use it. The picture would be reduced to 48×48 pixels anyway, so practically any picture where she appears could be a potential mosaic candidate.

Crop of the picture used to make the first mosaic

With the picture chosen, it was time to do the magic. I won’t detail exactly the steps I used to process the image, because this is very dependent on the crop you make and on things like illumination, tones, etc. Depending on the picture, it would require a very specific and different set of steps. The result of the Photoshop processing was the following:

Result of the “mosaicfication” of the picture. Left: Greyscale picture 48×48. Right: Mosaic picture 48×48 with only 4 levels of grey (and a background)

Python code

Now I had the image: very small (48×48 pixels) and with only 5 different “colours”. Even so, the assembly step is not trivial. With the processing done, I could, in theory, print it on a large sheet of paper and start building the set. But I figured I could easily mistake one shade of grey for another, or get lost in the middle of the assembly. So, I decided to ask for computer help. I wrote a small python program that loads the image and aids me in the assembly process.

The script does two main things. First, it translates the “colours” of the image into something more readable, like letters (A to E, according to the shade of grey of the pixel). Then, it creates an “assembly” sequence that helps me stick the small plates in the right place on the big base plate. This second step is what makes the assembly more “comfortable”. The idea is to show the image on the screen and highlight one line at a time. In the highlight, instead of showing the letter in the place where each plate goes, it shows each “block” of the same colour, with the amount of plates needed and the corresponding letter for that block. Then, it was a matter of putting all this in a loop over the lines and waiting for a keypress at each step to continue the assembly.

Gif of the steps showed by the program. In each highlighted line, the blocks of pieces with the same colour have the indication of the letter and the amount of plates needed

The whole python code is very very simple.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Aug  8 18:16:32 2018

@author: allanmartins
"""
from matplotlib import pyplot as plt
import numpy as np

# load the 48x48 PNG; the scaling + uint8 wrap-around just gives each of the
# 5 grey levels a distinct integer code (the exact values are remapped below)
img = np.uint8(255*np.round(255*plt.imread('bebel48x48x5.png')))

values = np.unique(img)

# remap the unique codes to the integers 5..1 (printed later as the letters A..E)
for (i, v) in zip(range(len(values)), values):
    img[img==v] = 5-i


fig = plt.figure(figsize=(20, 20))

line = -0.5
for I in img:  # one row of the mosaic at a time
    plt.cla()

    # mark the positions along the row where the colour changes
    arrayTmp = np.concatenate((np.array([1.0],dtype=np.float32),np.abs(np.diff(I))))

    arrayTmp = np.greater_equal(arrayTmp, 0.5)

    # idx: starting column of each block of equal colour; levels: the colour of each block
    idx = [i for (i, v) in zip(range(len(arrayTmp)),list(arrayTmp)) if v>0]
    levels = I[idx]
    idx.append(48)
    
#    fig = plt.figure(figsize=(15, 15))
    plt.set_cmap('gray')
    plt.imshow(img, vmin=1, vmax=6)

    # draw a red box around each block and write "letter : number of plates"
    for i in range(len(idx)-1):
        lineColor = [1, 0.2, 0.2]
        textColor = [1, 1, 1] if not levels[i] == 5 else [0, 0, 0]
        plt.plot([idx[i]-0.5, idx[i+1]-0.5, idx[i+1]-0.5, idx[i]-0.5, idx[i]-0.5], [line, line, line+1, line+1, line], color=lineColor, linewidth=4)
        plt.text(-0.75+(idx[i]+idx[i+1])/2.0, line+0.75, '%c : %d'%(levels[i]+64, idx[i+1]-idx[i]), color=textColor, size=15)

    

    plt.show(block=False)
    fig.canvas.draw()
    fig.canvas.flush_events()
    
    
    
    # also print the row as text: quantities on one line, letters on the next
    sQtd = ''
    sLevels = ''
    for d,l in zip(np.diff(idx), levels):
        sQtd += '%4d'%d
        sLevels += '%4c'%(l+64)

    print(sQtd + '\n' + sLevels)
    print('\n')
    input()   # wait for a keypress before highlighting the next row
    
    line += 1

The result

As a result, I now have the infrastructure to make any mosaic with any picture. I just have to be patient enough to disassemble the old one and follow the script to assemble a new one. The final result for the first image is here (and a time lapse of the assembly). I don’t remember exactly how long it took to finish, but from the number of frames in the time lapse I estimate about 2h for the whole process.

Custom Lego mosaic picture

Time lapse of the assembly process

Conclusion

Despite the title of this post, LEGO is doing an awesome job with this set. If you google for images of mosaics you will see that, in general, if you follow the guidelines, your LEGO mosaic will be cool! It is very difficult for a consumer product to be perfect with this kind of technology (image processing, colour quantisation, etc.). Still, it is good to know some coding in cases like this, which allows us to take advantage of the “hardware” of the set (4500 1×1 plates) and make our own custom “custom LEGO Mosaic set” 😀.

I hope you liked this post! See you in the next one!


We take gears for granted…

Introduction

Since I was a kid, I have always loved toys with “moving parts”. In particular, I was always fascinated by the transmission of movement between two parts of a toy. So, whenever a toy had “gears” in it, to me it looked “sophisticated”. This fascination never ended, and a couple of years ago, when I bought my 3D printer, I wanted to build my own “sophisticated” moving parts and decided to print some gears.

I didn’t want to just download gears from thingiverse.com and simply print them. I wanted to be able to design my own gears, in the size I want, with the specifications I need. Well… I never thought I would have so much fun…

So, in this post I will talk briefly about this little weekend project of mine: Understanding a tiny bit about gears!

The beautiful math behind it

At first I thought that the only math that I needed was the following

    \[\frac{\omega_1}{\omega_2} = \frac{R_2}{R_1}\]

In other words, the ratio of the angular speeds is the inverse of the ratio of the gear radii. Everyone knows that, right? If you have a large gear and a small one, you have to rotate the small one a lot to rotate the big one a little. So, to make a gear pair that doubles the speed, I just had to make one with, say, 6 cm of diameter and the other with 3 cm. Ha… but how many teeth? What should their shape be? Little squares? Little triangles? These questions started to pop up and I got more and more curious.

At first I thought I would use a simple shape for the teeth. I would not put too many teeth, because they could wear out with use. Also, not too few, because then they would have to be big and I might end up with too much slack. Gee, the thing started to get complicated. So, I decided to research a bit. To my surprise, I found out that “gears” are a whole area in the field of Mechanical Engineering. The first thing I noticed is that most gears have a funny rounded shape for the teeth. Also, they have some crops and hollow spaces in the teeth that vary from gear to gear. Clearly, those were there for a reason.

Those “structures” are parameters of the gear. Depending on the use, material or environment, they are chosen at design time to optimise the gear’s function. However, practically all gears have one thing in common: the contact between the gear teeth must be optimal! It must be such that, during rotation, the working tooth (the one doing the pushing) “pushes” the other gear’s tooth in such a way that the force is always perpendicular to the contact, as if we had one straight object pushing another along a straight line. Moreover, the movement must be such that, when one tooth leaves the contact region, another tooth takes over, so the movement is continuous (see figure).

Figure 1: Movement being transmitted from one tooth to another. The contact and the force is always perpendicular and is continuously transmitted as the gear rotates. (source: Wikipedia. See references)

This last requirement can be mathematically achieved by a curve called the “involute”. The involute is the curve you get when you take a closed curve (a circle, for example) and start to “peel off” its perimeter: the free end of the “peeled” string traces the points of the involute (see figure).

Figure 2: Two involute curves (red). Depending on the starting point, we have different curves. (source: Wikipedia. See references)

So, the shape of the teeth is the involute of the original gear shape. Actually, not all gears are like that; the ones that are, are called “involute gears”. Moreover, gears do not need to be circular. If you have a gear with another shape, you just need to make its teeth follow the involute of that shape.

The question now is: How to find the involute of a certain shape?

Following the “peeling” definition of the involute, it is not super hard to realise that the relationship between the involute and its generating curve is that, at each point of the involute, the line perpendicular to it is tangent to the original curve. With that, we have the following relations:

    \[\begin{array}{l} \dot X(t)\,\sqrt{\dot x^2(t) + \dot y^2(t)} = \dot y(t)\,\sqrt{(x(t) - X(t))^2 + (y(t) - Y(t))^2} \\ \dot Y(t)\,\sqrt{\dot x^2(t) + \dot y^2(t)} = -\dot x(t)\,\sqrt{(x(t) - X(t))^2 + (y(t) - Y(t))^2} \end{array}\]

which is a weird-looking pair of differential equations! Here, (X(t), Y(t)) is the parametric curve of the involute and (x(t), y(t)) is the parametric curve of the original shape.

Some curves do simplify this a lot. For instance, for a circle (that will generate a normal gear) the solution is pretty simple:

    \[\begin{array}{l} X(t) = r\left( {\cos (t) + t\sin (t)} \right)\\ Y(t) = r\left( {\sin (t) - t\cos (t)} \right) \end{array}\]

Matlab fun

So, how do we go from the equation of one involute to the drawing of a set of teeth?

To test the concept, I first implemented some Matlab scripts to draw and animate some gears. In my implementation I used a normal circular gear, so the original curve is a circle. Nevertheless, I implemented the differential equation solver using Euler’s method, so I could make weird-shaped gears (in a future project). The general idea is to draw a piece of involute starting at each tooth and going up to half of the tooth, and then draw the other half starting from the end of the tooth and coming back to its middle. The “algorithm” to draw a set of involute teeth is the following (a minimal Python sketch of it follows the list):

  • Define the radius and the number of teeth
  • Define \theta and d\theta (equal to some small number that will be the integration step)
  • Repeat for each n from 0 to N-1 teeth:
    • Start with \theta = n\frac{2\,\pi}{N}
    • Update the involute curve (Euler’s method)
      X(\theta) = X(\theta) + {\dot X(\theta)}\,d\theta
      Y(\theta) = Y(\theta) + {\dot Y(\theta)}\,d\theta
      until half of the current tooth
    • Update the angle in the clockwise direction: \theta = \theta + d\theta
    • Start with \theta = (n+1)\frac{2\,\pi}{N}
    • Update the involute curve
      X(\theta) = X(\theta) + {\dot X(\theta)}\,d\theta
      Y(\theta) = Y(\theta) + {\dot Y(\theta)}\,d\theta
      until half of the current tooth
    • Update the angle in the anti-clockwise direction: \theta = \theta - d\theta
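A minimal Python version of this procedure is sketched below (my original implementation was in Matlab). It only draws the bare, parameter-free teeth of Figure 3, and the fixed “unwind” length is an arbitrary stand-in for “until half of the tooth”:

import numpy as np
import matplotlib.pyplot as plt

def involute_flank(r, theta0, unwind, steps=50):
    # One involute flank, integrated with Euler's method from the base circle:
    # r is the base-circle radius, theta0 the starting angle on the circle and
    # 'unwind' how far (in radians of unwound string) the flank extends.
    ds = unwind / steps
    x, y = r * np.cos(theta0), r * np.sin(theta0)
    pts = [(x, y)]
    s = 0.0
    for _ in range(steps):
        # derivatives of the circle involute: dX/ds = r*s*cos(theta0+s), dY/ds = r*s*sin(theta0+s)
        x += r * s * np.cos(theta0 + s) * ds
        y += r * s * np.sin(theta0 + s) * ds
        s += ds
        pts.append((x, y))
    return np.array(pts)

def mirror_about(pts, alpha):
    # Mirror a set of points about the radial line through the origin at angle alpha
    c, s = np.cos(alpha), np.sin(alpha)
    xr = pts[:, 0] * c + pts[:, 1] * s            # rotate by -alpha
    yr = -(-pts[:, 0] * s + pts[:, 1] * c)        # flip
    return np.column_stack((xr * c - yr * s, xr * s + yr * c))  # rotate back

def bare_gear(r=1.0, n_teeth=12, unwind=0.35):
    pitch = 2 * np.pi / n_teeth
    outline = []
    for n in range(n_teeth):
        theta0 = n * pitch
        flank_a = involute_flank(r, theta0, unwind)
        # the opposite flank is the mirror image about the middle of the tooth
        flank_b = mirror_about(flank_a, theta0 + pitch / 2)[::-1]
        outline.append(np.vstack((flank_a, flank_b)))
    return np.vstack(outline)

g = bare_gear()
plt.plot(g[:, 0], g[:, 1])
plt.axis('equal')
plt.show()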

If we just follow the algorithm above, our gear, although functional, won’t look like a traditional gear. All the auxiliary parameters are missing (like the slack between teeth, the cropping, etc.). It will basically be a flower-shaped thing (see figure below).

Figure 3: Example of a gear with all auxiliary parameters equal to zero

When you use the technical parameters of a gear, it starts to look familiar. The main parameters are called “addendum”, “dedendum” and “slack”. Each one controls some aspect of the mechanical characteristics of the gear, like the flexibility of the teeth, the depth of the teeth, etc. The next figure illustrates those parameters.

Figure 4: Parameters of a gear

The next step was to position a pair of gears next to each other. First, we choose the parameters for the first gear (addendum, radius, number of teeth, etc.). For the second one, we choose everything but the number of teeth, which must be computed from the ratio of the radii.

For the positioning, we put the first gear at the position (0.0, 0.0) and the second one at some position (x_0, 0.0). Unfortunately, the value of x_0 is not simply r_1 + r_2 (the sum of the radii), because the involutes of the teeth must touch at a single point. To solve this, I used a brute force approach. I’m sure there is a more direct method, but I was eager to draw the pair %-). Basically, I computed the curve for the first tooth of the right gear and the first tooth of the left gear (rotated by 2\pi/N so they can match when in the right position). Initially I positioned the second one at r_1 + r_2 and displaced it bit by bit until the curves touched in only one place. In practice I had to deal with a finite number of points on each curve, so I had to set a threshold to consider that the two curves were “touching” each other.

With the gears correctly positioned, it was time to make a 3D version of them. That was easy, because I had the positions of every point of each gear. So, for each pair of points in each gear I “extruded” a rectangular patch by h_0 in the z direction. Then I made a patch with the gear points and copied another one at z = h_0. Voilà! Two gears that match perfectly!

The obvious next step was to animate them 😃! That is really simple: we just rotate the first gear by a small amount, rotate the second by this amount times the radius ratio, and keep doing that while redrawing the gears over and over. The result is a very satisfying pair of rotating gears!


Figure 5: Animations of two gear sets with different parameters.

Blender fun

Now that I got used to the “science”, I was ready to 3D print some gears. For that, I had to transfer the design to Blender, which is the best 3D software on the planet. The problem is that it would be a pain in the neck to design the gear in Matlab, export it to something like the STL format, test it in Blender and, if it’s not quite what I wanted, repeat the process, etc. So, I decided to make a plugin out of the algorithms. Basically, I reimplemented all the Matlab stuff in python, following the protocol for a Blender plugin. The result was a plugin that adds a “gear pair” object. The plugin adds a flat gear pair; in general, we must extrude the shapes to whatever depth we need in the project. But the good thing is that we can tune the parameters and see the results on the fly (see below).
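The plugin itself is on GitHub; just to show the general shape of such an add-on, here is a stripped-down sketch of an operator that turns a list of 2D outline points into a flat mesh. gear_outline() is a placeholder for the involute code above, and the names and parameters are illustrative, not the real plugin’s:

import bpy
import numpy as np

def gear_outline(radius=1.0, n_teeth=12):
    # placeholder outline: in the real plugin this is the involute computation shown earlier
    t = np.linspace(0, 2*np.pi, 16*n_teeth, endpoint=False)
    r = radius * (1.0 + 0.08*np.sign(np.sin(n_teeth*t)))
    return np.column_stack((r*np.cos(t), r*np.sin(t)))

class MESH_OT_add_gear_sketch(bpy.types.Operator):
    """Add a flat gear outline as a new mesh object"""
    bl_idname = "mesh.add_gear_sketch"
    bl_label = "Gear (sketch)"
    bl_options = {'REGISTER', 'UNDO'}

    def execute(self, context):
        pts = gear_outline()
        verts = [(x, y, 0.0) for x, y in pts]
        edges = [(i, (i + 1) % len(verts)) for i in range(len(verts))]
        mesh = bpy.data.meshes.new("GearSketch")
        mesh.from_pydata(verts, edges, [])       # flat outline; extrude it afterwards
        obj = bpy.data.objects.new("GearSketch", mesh)
        context.collection.objects.link(obj)
        return {'FINISHED'}

def register():
    bpy.utils.register_class(MESH_OT_add_gear_sketch)

if __name__ == "__main__":
    register()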


Figure 6: Blender plugin

With the gears in blender, the sky is the limit. So, it was time to 3D print some. I did a lot of tests and printed a lot of gears. My favourite so far is the “do nothing machine” gear set (see video): it is just a bunch of gears of different sizes that rotate together. My little daughter loves to keep spinning them 😂!


Figure 7: Test with small gears


Figure 8: Do nothing machine with gears

Conclusions

As always, the goal of my posts is to share my enthusiasm. Who would have thought that a simple thing like a gear would involve hard to solve differential equations and such beautiful science with it!

Thanks for reading! See you in the next post!

References

[1] – Wikipedia – Involute Gears
[2] – Wikipedia – Involute Curve
[3] – GitHub – gears


Blender + Kinect

Introduction

In this small project, I finally did what I had been dreaming of doing since I started playing with the Kinect API. For those who don’t know what a Kinect is, it is a special “joystick” that Microsoft invented. Instead of holding the joystick, you move in front of it and it captures your “position”. In this project I interfaced this awesome sensor with an awesome piece of software: blender.

Kinect

The Kinect sensor is an awesome device! It has what is called a “depth camera”: basically a camera that produces an image where each pixel value is related to how far away the surface that the pixel represents is. This is cool because, in this image, it is very easy to segment a person or an object. Using the “depth” of the segmented person in the image, it is possible, although not quite easy, to identify the person’s limbs as well as their joints. The algorithm that does that is implemented on the device and consists of a very elaborate statistical manipulation of the segmented data.

Microsoft did a great job implementing those algorithms and making them available in the Kinect API. Hence, when you install the SDK, you get commands that return the x, y, z coordinates of around 25 joints of the body of the person in front of the sensor. Actually, it can return the joints of several people at the same time! Each joint relates to a link representing some limb of the person. We have, for instance, the left and right elbows, the knees, the torso, etc. The set of joints is called a skeleton. In short, the API returns to you, for each frame that the camera grabs, the body position of the person/people in front of the sensor! And Microsoft made it very easy to use!

With the API at hand, the possibilities are limitless! So, you just need a good environment to “draw” the skeleton of the person and you can do whatever you want with it. I can’t think of a better environment than my favorite 3D software: blender!

Blender side

Blender is THE 3D software for hobbyists like me. The main feature, in my opinion, is the python interface. Basically, you can completely command blender using a python script inside blender itself. You can create custom panels, custom commands and even controls (like buttons, sliders, etc.). Within the panel, we can have a “timer” where we put code that will be called in a loop without blocking the normal blender operations.

So, for this project, we basically have a “crash test dummy” doll made of armature bones. The special thing here is that the dummy’s joints and limbs are controlled by the Kinect sensor! In short, we implemented a timer loop that reads the Kinect positions for each joint and sets the corresponding bone coordinates in the dummy doll. The interface looks like this. There is also a “rigged” version where the bones are rigged to a simple “solid body”.

Hence, the blender file is composed of basically three parts: the 3D body, the panel script and a Kinect interface script. The challenge was to interface the Kinect with python. For that I used two awesome libraries: pygame and PyKinect2. I’ll talk about them in the next section. Unfortunately, the libraries did not like the python environment of blender and were unable to run inside blender itself. The solution was to implement a client-server structure. The idea was to implement a small Inter-Process Communication (IPC) system with local sockets. The blender script would be the client (sockets are OK in blender 😇) and a terminal application would run the server.

Python side

On the python side of things we have basically the client and the server I just mentioned. They communicate via sockets to implement a simple IPC system. The client is implemented in the blender file and consists of a simple socket client that opens a connection to the server and reads values. The protocol is extremely simple: the values are sent in order. First all the x coordinates of the joints, then the y’s and then the z’s. So, the client is basically a loop that reads joint coordinates and “plots” them accordingly on the 3D dummy doll.
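As a rough sketch of that client side (simplified: the real file also maps the joints onto the armature bones, and the port number, joint count and 4-byte float encoding here are assumptions for the example):

import socket
import struct

N_JOINTS = 25            # the Kinect v2 skeleton has 25 joints
PORT = 50007             # arbitrary local port for the example

def read_skeleton(sock):
    # Read one frame: all x's, then all y's, then all z's, as 32-bit floats
    n_bytes = 3 * N_JOINTS * 4
    data = b''
    while len(data) < n_bytes:
        chunk = sock.recv(n_bytes - len(data))
        if not chunk:
            raise ConnectionError("server closed")
        data += chunk
    values = struct.unpack('<%df' % (3 * N_JOINTS), data)
    xs, ys, zs = values[:N_JOINTS], values[N_JOINTS:2*N_JOINTS], values[2*N_JOINTS:]
    return list(zip(xs, ys, zs))

# inside blender this would run in the panel's timer callback instead of a plain loop
if __name__ == "__main__":
    with socket.create_connection(("127.0.0.1", PORT)) as sock:
        joints = read_skeleton(sock)
        print(joints[0])     # the first joint's (x, y, z)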

The server is also astonishingly simple. It has two classes: KinectBodyServer and BodyGameRuntime. The first one is a server of body positions: it implements the socket server and literally “serves body positions” to the client. It does so by instantiating a Kinect object and, using the PyKinect2 API, asking the device for joint positions. The second class is the pygame object that takes care of showing the window with the camera image (the RGB camera) and handling window events like Ctrl+C, close, etc. It also instantiates the body server class to keep polling for joint positions and sending them to the client.

Everything works synchronously. When the client connects to the server, it expects to keep receiving joint positions until the server closes and disconnects. The effect is amazing (see the video below). You can move in front of the sensor and see the blender dummy doll imitate your moves 😀!

Motion Capture

Not long ago, big companies spent a lot of money and resources to “motion capture” a person’s body position. It involved dressing the actor with a suit full of small color markers and filming it from several angles. A technician would then manually correct the captured position of each marker.


(source: https://www.sp.edu.sg/mad/about-sd/facilities/motion-capture-studio)

Now, if you don’t need extreme precision on the positions, an inexpensive piece of hardware like Kinect and a completely free software like blender (and the python libraries) can do the job!

Having fun

The whole system in action can be viewed in the video below. The capture was made in real time. The image from the camera and the position of the skeleton have a small delay, but it is negligible. That means one could use this, even with the IPC communication delay and all, as a game joystick.

Conclusions

As always, I hope you liked this post. Feel free to share and comment. The files are available at GitHub as always (see reference). Thank you for reading!!!

References

[1] – blender-kinect
[2] – PyKinect2
[3] – PyGame


Wavefront sensing

Introduction

Wavefront sensing is the act of sensing a wavefront. As useless as this sentence is, it is true, and we perform this action with a wavefront sensor… But if you bear with me, I promise that this is going to be one of the coolest mixes of science and technology that you will hear about! Actually, I promise that it will even be poetic since, in this post, you will learn why the stars twinkle in the sky!

Just a little disclaimer before I start: I'm not an optical engineer and I understand very little about the subject. This post is just me sharing my enthusiasm for this amazing topic.

Well, first of all, the word sensing just means "measuring". Hence, wavefront sensing is the act of measuring a wavefront with a wavefront sensor. To understand what a wavefront measurement is, let me explain what a wavefront is and why one would want to measure it. It has to do with light. In a way, wavefront measurement is the closest we can get to measuring the "shape" of a light wave. The first time I came across this concept (here at the Geneva observatory) I asked myself: "What does that mean? Doesn't light have the form of its source??". I guess that's a fair question. In the technical use of the term, we consider here that we want to measure the "shape" of the wavefront that originated in a point source of light very far away. You may say: "But that's just a plane wave!", and you would be right if we were in a completely homogeneous environment. If that were the case, we would have nothing to measure %-). So, what happens when we are in a non-homogeneous medium? The answer is: the wavefront gets distorted, and we get to measure how distorted it is! When light propagates as a plane wave and passes through a non-homogeneous medium, parts of its wavefront slow down and others don't. That creates a "shape" in the phase of the wavefront. Because of the differences in the index of refraction of the medium, in some locations the oscillation of the electromagnetic field arrives a bit earlier and in others it arrives a bit later. This creates a difference in phase at each point in space. The following animation tries to convey this idea.

Figure 1: Wavefront distorted by a medium

As we see in the animation, the wavefront is flat before reaching the non-homogeneous medium (middle part). As it passes through the medium, it exits with a distorted wavefront. Since the wavefront is always traveling in a certain direction, we are only interested in measuring it in a plane (in the case of the animation, a line) perpendicular to its direction of propagation.

In the animation above, a measurement at the left side (before the medium) would result in a constant value since, for all "y" values of the figure, the wave arrives at the same time. If we measure at the right side, after the medium, we get something like this:

Figure 2: Wavefront measurement for the medium showed in Figure 1

Of course, for a real wavefront the measurement would be 2D (so the plot would be a surface plot).

An obvious question at this point is: why would someone want to measure a wavefront? There are a lot of applications in optics that require the measurement of a wavefront. For instance, lens manufacturers would like to know if the shape of the lenses they manufacture is within a certain specification. So they shoot a plane wavefront through the lens and measure the deformation in the wavefront caused by the lens. By doing that, they measure lens properties like astigmatism, coma, etc. Actually, have you ever asked yourself where those names (defocus, astigmatism, coma) came from? Nevertheless, one of the most direct uses of wavefront measurement is in astronomy! Astronomers like to look at the stars. But the stars are very far away and, for us, they are an almost perfect point source of light. That means we should receive a plane wave here, right? Well, if we did, we would have nothing to breathe! Yes, the atmosphere is the worst enemy of astronomers (that's why they spend millions and millions to put telescopes in orbit). When the light of a star or any object passes through the atmosphere, it gets distorted. The distorted wavefront makes the object's image swirl and stretch randomly. Worse than that, the atmosphere is always changing its density due to things like wind, clouds and differences in temperature. So the wavefront follows these zigzags, and THAT is why the stars twinkle in the sky! (I promised, didn't I? 😃) So, astronomers build sophisticated mechanisms using adaptive optics to compensate for this twinkling, and that involves, first of all, measuring the distortion of the wavefront.

Wavefront measurement

So, how the heck do we measure this wavefront? We could try to put small light sensors in an array and "sense" the electromagnetic waves that arrive at them… But light is pretty fast. It would be very, very difficult to measure the time difference between the arrival of the light at one sensor compared to another (the animation in figure 1 is like a SUPER MEGA slow-motion version of reality). So, we have to be clever and appeal to the "jump of the cat" (an expression we use in Brazil to mean a very smart move that solves a problem).

In fact, the solution is very clever! Although there are different kinds of wavefront sensors, the one I will explain here is called the Shack–Hartmann wavefront sensor. The idea is amazingly simple. Instead of directly measuring the phase of the wave (which would be difficult because light travels too fast), we can measure the "inclination", or the local direction of propagation, of the wavefront. So, instead of an array of light sensors that would try to measure the arrival time, we make an array of sensors that measure the local direction of the wavefront. With this information it is possible to recover the phase of the wavefront, but that is a technical detail. In summary, we only need to measure the angles of arrival of the wavefront. Figure 3 tries to illustrate what I mean:

Figure 3: Example of measurement of the direction of arrival, instead of the time of arrival.

Each horizontal division in the figure is a "sub-aperture" where we measure the inclination of the wavefront. Basically, we measure the angle of arrival instead of the time of arrival (or phase of arrival). If we make those sub-apertures small enough, we can reconstruct a good picture of the wavefront. Remember that a real wavefront lives on a plane, so we should measure TWO angles of arrival for each sub-aperture (the angle in x and the angle in y, considering that z is the direction of propagation).

At this point you might be thinking: "Wait… you just traded six for half a dozen! How the heck do we measure this angle of arrival? It seems far more difficult than measuring the phase itself!". Now it's time to get clever! The idea is to put a small camera in each sub-aperture. Each tiny camera is equipped with a tiny little lens that focuses its portion of the wavefront onto the camera! At each camera, depending on the angle of arrival, the lens will focus the locally plane wave onto a region of the camera that depends on the direction it is traveling. That makes each camera see a small region of bright pixels. If we measure where those pixels are in relation to the center of the tiny camera, we have the angle of arrival!!! You might be thinking that it's crazy to manufacture small cameras and small lenses in a reasonable package. Actually, what happens is that we just have the tiny lenses in front of one big CCD. And it turns out that we don't need high-resolution cameras in each sub-aperture. A typical wavefront sensor has 32×32 sub-apertures, each one with a 16×16 grid of pixels. That is enough to give a very good estimation of the wavefront. Figure 4 shows a real image from a wavefront sensor.

Figure 4: Sample image from a real wavefront sensor
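Just to make the principle concrete in code, here is a minimal numpy sketch of how spot displacements in each sub-aperture translate into local slopes. The fake image, the grid size and the pixel/focal-length values are all invented for illustration; a real sensor would of course use its calibrated geometry:

import numpy as np

N_SUB = 32         # sub-apertures per side (illustrative)
N_PIX = 16         # pixels per sub-aperture (illustrative)
PIXEL_SIZE = 5e-6  # meters, made-up value
FOCAL_LEN = 2e-3   # lenslet focal length in meters, made-up value

# Fake detector image: one bright spot somewhere inside each sub-aperture
rng = np.random.default_rng(0)
image = rng.random((N_SUB * N_PIX, N_SUB * N_PIX)) * 0.01
for i in range(N_SUB):
    for j in range(N_SUB):
        image[i*N_PIX + rng.integers(N_PIX), j*N_PIX + rng.integers(N_PIX)] += 1.0

# For each sub-aperture: centroid of the spot, offset from the sub-aperture
# center, and the corresponding angle of arrival (small-angle approximation)
slopes_x = np.zeros((N_SUB, N_SUB))
slopes_y = np.zeros((N_SUB, N_SUB))
yy, xx = np.mgrid[0:N_PIX, 0:N_PIX]
for i in range(N_SUB):
    for j in range(N_SUB):
        sub = image[i*N_PIX:(i+1)*N_PIX, j*N_PIX:(j+1)*N_PIX]
        cx = (sub * xx).sum() / sub.sum()         # centroid in pixels
        cy = (sub * yy).sum() / sub.sum()
        dx = (cx - (N_PIX - 1) / 2) * PIXEL_SIZE  # offset from center, meters
        dy = (cy - (N_PIX - 1) / 2) * PIXEL_SIZE
        slopes_x[i, j] = dx / FOCAL_LEN           # angle ≈ displacement / focal length
        slopes_y[i, j] = dy / FOCAL_LEN

print(slopes_x.mean(), slopes_y.mean())

From these two slope maps, the phase itself can be recovered by integrating the slopes, which is the "technical detail" mentioned above.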

Simulations!

The principle is so simple that we can simulate it in several different ways. The first one is using the "light ray" approach to optics. This is the one we learn in college: the famous one where they put a stick figure on one side of a lens or a mirror and then draw some lines representing the light path. The wavefront sensor would be a bit more complex to draw by hand, so we use the computer. Basically, what I did was to make a simple blender file that contains a light source, something for the light to pass through (the atmosphere) and the sensor itself. The sensor is just a 2D array of small lenses (each one made up of two intersecting spheres) and a set of planes behind each lens to receive the light. We then set up a camera between the lenses and the planes to be able to render the array of planes, as if they were the CCD detectors of the wavefront sensor. Now we have to set the material for each object. The "detectors" must be something opaque so we can see the light focusing on them. The atmosphere has to be some kind of glass (with a relatively low index of refraction). The lenses also have glass as their material, but their index of refraction must be carefully adjusted so that the focus of each one falls exactly on the detector plane. That's pretty much it (see figure 5).

Figure 5: Setup for simulating a wavefront sensor in blender
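For reference, laying out such a grid of lenslets and detector planes can also be scripted instead of modeled by hand. The snippet below is only a sketch of the idea using Blender's python API (Blender 2.8+ operator names; the object sizes and spacing are invented, and the actual file on GitHub was modeled manually):

import bpy

N = 8          # lenslets per side (illustrative)
PITCH = 0.2    # spacing between lenslets, in blender units (illustrative)
GAP = 0.5      # distance between lens plane and detector plane (illustrative)

for i in range(N):
    for j in range(N):
        x = (i - N / 2) * PITCH
        y = (j - N / 2) * PITCH
        # One small sphere per lenslet; in the real file each lens is the
        # intersection of two spheres with a glass material applied
        bpy.ops.mesh.primitive_uv_sphere_add(radius=PITCH / 2, location=(x, y, 0.0))
        # One small plane behind each lens acting as the "detector"
        bpy.ops.mesh.primitive_plane_add(size=PITCH, location=(x, y, -GAP))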

To simulate the action, we move the "atmosphere" object back and forth and render each step. As the light passes through the object, the rays get refracted. That changes the direction of the light rays and simulates the wavefront being deformed. The result is shown in figure 6.

Figure 6: Render of the image on the wavefront sensor as we move the atmosphere, simulating the real-time deformation of the wavefront.

Another way of simulating the wavefront sensor is to simulate the light wave itself passing through different media, including the lenses themselves, and arriving at some absorbing material (to emulate the detectors). That sounds like hardcore simulation stuff, and it kind of is. But, as I said in the "about" of this blog, I'm curious and hyperactive, so here we go: a Maxwell's equations simulation of a 1D wavefront sensor. For that simulation, I used a code that I wrote a couple of years back. I'll make a post about it some day. It is an implementation of the FDTD (Finite-Difference Time-Domain) method. This is a numerical method to solve partial differential equations. Basically, you set up your environment as a 2D grid with sources of electromagnetic field, the material of each element on the grid, etc. Then, you run the iterative method and watch the electromagnetic fields propagate. Anyway, I put some layers of conductive material to contain the wave in some places and to emulate the detector. Then I added a layer with small lenses with the right refractive index to emulate the sub-apertures. Finally, I put a monochromatic sinusoidal source in one place and pressed run. Voilà! You can see the result in the video below.

The video contains the explanation of each element and, as you can see, the idea of a wavefront sensor is so brilliantly simple that we can simulate it at the Maxwell's-equations level!
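To give an idea of what such a solver boils down to, here is a minimal 2D FDTD core in python (TM polarization, free space, normalized units, a single sinusoidal point source). It is only a sketch of the update scheme, not the code used in the video: materials, absorbing boundaries and the lens layer are left out.

import numpy as np

NX, NY = 200, 200        # grid size (illustrative)
STEPS = 300
C = 0.5                  # Courant number, must be <= 1/sqrt(2) in 2D

Ez = np.zeros((NX, NY))  # z component of the electric field
Hx = np.zeros((NX, NY))  # magnetic field components
Hy = np.zeros((NX, NY))

for n in range(STEPS):
    # Update the magnetic field from the curl of Ez (Yee scheme)
    Hx[:, :-1] -= C * (Ez[:, 1:] - Ez[:, :-1])
    Hy[:-1, :] += C * (Ez[1:, :] - Ez[:-1, :])
    # Update the electric field from the curl of H
    Ez[1:, 1:] += C * ((Hy[1:, 1:] - Hy[:-1, 1:]) - (Hx[1:, 1:] - Hx[1:, :-1]))
    # Monochromatic sinusoidal point source in the middle of the grid
    Ez[NX // 2, NY // 2] += np.sin(2 * np.pi * 0.05 * n)

print(Ez.max(), Ez.min())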

DIY (physical) experiment

To finish this post, let me show you that you can even build your own wavefront sensor "mockup" at home! What I did in this experiment was to build a physical version of the Shack–Hartmann principle. The idea is to use a "blank wall" as the detector and a webcam to record the light that arrives at the wall. This way I can simulate a giant set of detectors. Now the difficult part: the sub-aperture lenses… For this one, I had to make the cat jump. Instead of an array of hundreds of lenses in front of the "detector wall", I used a toy laser that comes with one of those diffraction crystals that project "little star-like patterns" (see figure 7).


Figure 7: 1 – Toy laser (black stick) held in place. 2 – green “blank wall”. 3 – Diffraction pattern that generates a regular set of little dots.

With this setup, it was time to do some coding. Basically, I wrote some Matlab functions to capture the webcam image of the wall with the dots. In those functions I set up some calibration computations to compensate for the shape and placement of the dots on the wall (see the video below). That calibration tells me where a "neutral" atmosphere projects the points (the centers of the sub-apertures). Then, for each frame, I capture and process the image from the webcam. For each image, I compute the position of the detected centers with respect to the calibrated ones. With this procedure I have, for each dot, a vector that points from the calibrated center to where the dot currently is. That is exactly the local wavefront direction that the toy laser is emulating!

After that, it was a matter of having fun with the setup. I put all kinds of transparent materials in front of the laser and saw the "vector field" of the wavefront directions on screen. I even put my glasses in front of it and was able to see the same pattern the optical engineers saw when they were manufacturing the lenses! With a bit of effort I think I could even MEASURE (poorly, of course) the level of astigmatism of my glasses!

The code is available on GitHub[2], but basically the loop consists of processing each frame and showing the vector field of the wavefront:

while 1
    % Grab a frame from the webcam and keep only the green channel
    newIcap = cam.snapshot();
    newI = newIcap(:,:,2);

    % Threshold the image to isolate the laser dots (helper function)
    newBW = segment(newI, th);

    % Compute the center of each detected dot (helper function)
    [newCx, newCy] = processImage(newBW, M, R);

    % Magnitude of the displacement of each dot from its calibrated center
    z = sqrt((calibratedCy-newCy).^2 + (calibratedCx-newCx).^2);
    zz = imresize(z, [480, 640]);

    % Show the displacement magnitude as a translucent background and the
    % displacement vectors (the measured local slopes) on top of it
    figure(1);
    cla();
    hold('on');
    imagesc(zz(end:-1:1,:), 'alphadata', 0.2)
    quiver(calibratedCy, calibratedCx, calibratedCy-newCy, calibratedCx-newCx, 'linewidth', 2);

    axis([1 640 1 480])

    drawnow()
end

Below you can watch the video of the whole fun!

Conclusions

DIY projects are awesome for understanding theoretical concepts. In this case, the mix of simulation and practical implementation made it very easy to understand how the device works. I believe the use of the diffraction crystal from a toy laser is original, but I didn't research it much. If you know of any experiment that uses this same principle, please let me know in the comments!

Again, thank you for reading this far! Feel free to comment or share this post. If you are an optical engineer, please correct any mistake I might have made in this post. It does not intend to teach optics, but rather to share my passion for a subject that I know so little about!

Thank you for reading this post. I hope you liked it and, as always, feel free to comment, give me feedback, or share if you want! See you in the next post!

References

[1] – Shack–Hartmann wavefront sensor

[2] – Matlab code


Uroflowmetry…

Introduction

This post is about health. Actually, it's about a little project that implements a medical device called a uroflowmeter (not sure if that is the usual English word). It's a mouthful of a name meaning "little device that measures the flow of pee" (yes, pee as in urine 😇).

Last year I had to perform an exam called uroflowmetry (nothing serious, a lot of people do that exam). The doctor said that, as the name implies, I had to measure "how fast I pee". Technically, it measures the flow of urine while you urinate. At the time I thought to myself: "How the heck will he do that?". Maybe it was done with a very sophisticated device, I didn't know. Then he explained that I had to pee in a special toilet hooked up to a computer. He showed me the special device and left the room (it would be difficult if he stayed and watched hehehehehe). I didn't see any sign of ultra technology anywhere. Then I got the result, and it was nothing serious.

The exam result is a plot of flow vs time. Figure 1 shows an example of such an exam. Since my case was a simple one, the only thing that mattered in that exam was the maximum value. So, for me, the result was a single number (but I still had to do the whole exam and get the whole plot).

Figure 1 : Sample of a Uroflowmetry result[3]

Then he prescribed a very common medicine to improve the results and told me to repeat the exam a couple of days later. After about 4 or 5 days I returned for the second exam. I repeated the whole process and peed on the USD $10,000.00 toilet. This time I didn't feel I delivered my "average" performance. I guess sometimes you are nervous, stressed or just not concentrated enough. So, as expected, the results were not ultra mega perfect. Long story short, the doctor acknowledged the result of the exam and conducted the rest of the visit (I'm fine, if you are wondering heheheh).

That exam got me thinking that it (the exam) did not capture the real "performance" of the act, since it measured only one flow. I might have noticed some improvement with the medication, but I wasn't so sure. So, how to be sure? Well, if only I could repeat the exam several times in several different situations… By doing that I could see the performance on average. I thought so seriously about it that I asked the clinic how much the exam costs. The lady at the desk said it was around R$100 (more or less USD $30 at today's rate). That was a bummer… The health insurance won't simply pay for lots of exams, and if I were to do, say, 10 exams, it would cost me a thousand Brazilian Reais. Besides, I would still have only 10 data points.

The project

Thinking about that, maybe I could measure it myself…? So I decided to make my own "urine flow meter". It would be a hardware/software project, so I immediately thought of using an Arduino (figure 2). The tricky part is measuring the flow of something like urine. The solution I came up with was to make an Arduino digital scale and measure the weight of the pee while I'm peeing on it (awkward or disgusting as it sounds %-). Then I could measure the weight in real time every 20 to 50 ms or so. Knowing the density of the fluid, I could compute the flow by differentiating the weight measurements in time. Google told me that the density of urine is about 1.0025 g/ml (if you are curious, it's practically water). Later on I discovered that this is exactly how some machines work, including the one I did the exams on.

Figure 2 : Arduino board

First I had to make the digital scale, so I used a load cell with the right range for this project. The weight of the urine would range from roughly 0 to 300–500 g, so I acquired a 2 kg load cell (figures 3 and 4). I disassembled an old kitchen scale to use its "plate", which had the right space to hold the load cell, and 3D printed a support on a heavy base to avoid measurement errors due to flexing of the base. For that, I used a heavy ceramic tile, which worked very well. The cell "driver" was an HX711 board, read with the Arduino HX711 library. There is a cool youtube tutorial showing how to use this library [1].

Figure 3 : Detail of the load cell used to make the digital scale.

Figure 4 : Load cell connected to the driver HX711

The next problem was how to connect to the computer!!! This is not a regular project where you can assemble everything at your desk and run tests. Remember that I had to pee on the hardware! I also didn't want to bring a notebook to the bathroom every time I wanted to use it! The solution was to use a wifi module and communicate wirelessly. Figure 5 shows the module I used: an ESP8266. Now I could bring my prototype to the bathroom and pee on it while the data was sent to the computer safely.

Figure 5 : ESP8266 connection to the Arduino board.

Figure 6 : More pictures of the prototype

Once built, I had to calibrate it very precisely. I did some tests to measure the sensitivity of the scale. I used an old lock and a small ball bearing to calibrate it. I went to the metrology laboratory at my university, and a very nice technician in the lab (I'm so sorry I forgot his name, but he was very, very kind) measured the weights of the lock and the ball bearing on a precision scale. Then I started to do some tests (see the video below).

The last thing needed to make the prototype 100% finished was a funnel of the right size, so I 3D printed a custom one (figure 7).

Figure 7: Final prototype ready to use

Once it was calibrated and ready to measure weights in real time, it was time to code the flow calculation. At first sight, this is a straightforward task: you just differentiate the measurements in time. Something like

    \[ f[n] = \frac{w[n] - w[n-1]}{\rho \, \Delta t} \]

where f[n] and w[n] are the flow and the accumulated weight at sample n, ρ is the density of the fluid and Δt is the time between measurements.

Unfortunately, that simple procedure is not enough to generate the plot I was hoping for, because the data has too much noise. But I teach DSP, so I guess I can do some processing, right 😇? Well, I took the opportunity to test several algorithms (they are all in the source code on GitHub[2]). I won't dive into the details of each one here. The important thing is that the noise came largely from the differentiation process, so I tested filtering both before and after computing the derivative. The method that seemed to give the best results was the spline method: basically I downsampled the flow data, fitted it with splines and averaged N of those fits. One sample of the results can be seen in figure 8.

Figure 8 : Result of a typical exam done with the prototype.

The blue line is the filtered plot. The raw flow can be seen in light red. The black line marks where the max flow is, and the blue regions estimate the intervals of non-zero flow (if you get several blue regions, it means you stopped and restarted several times in the same "go"). At the top of the plot you have the two numbers the doctor is interested in: the max flow and the total volume.
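As an illustration of the processing chain (not the exact code from the GitHub repository), the sketch below goes from raw weight samples to a smoothed flow curve, max flow and total volume. The sampling period, the fake "exam" data and the smoothing factor are made-up values:

import numpy as np
from scipy.interpolate import UnivariateSpline

DT = 0.05        # sampling period in seconds (illustrative)
RHO = 1.0025     # urine density in g/ml, so the flow comes out in ml/s

# Fake weight measurements in grams: a smooth ramp plus sensor noise
t = np.arange(0, 30, DT)
true_weight = 300 / (1 + np.exp(-(t - 15) / 3))       # made-up "exam"
weight = true_weight + np.random.normal(0, 0.2, t.size)

# Raw flow: time derivative of the weight divided by the density
raw_flow = np.gradient(weight, DT) / RHO

# Smooth the noisy flow with a spline fit (one of several options one could test)
spline = UnivariateSpline(t, raw_flow, s=t.size * 10.0)  # smoothing hand-tuned
flow = spline(t)

max_flow = flow.max()                           # ml/s, the number the doctor wants
total_volume = (weight[-1] - weight[0]) / RHO   # ml
print('max flow: %.1f ml/s, total volume: %.0f ml' % (max_flow, total_volume))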

With everything working, it was time to use it! I spent 40 days using it and collected around 120 complete exams. Figure 9 shows all of them in one plot (just for the fun of it %-).

Figure 9 : 40 days of exams in one plot

Obviously, this plot is useless. Just showing a lot of exams doesn't mean much. Hence, what I did was to do the exams for some days without the medicine and then with the medicine. Then I took only the max flow of each exam and plotted it over time. I also computed the mean and the standard deviation of the points with and without the medicine and plotted them on the same graph. The result is shown in figure 10.

Figure 10 : Plot of max flow versus time. Red points are the max flow results without the medicine. The blue ones are the results with the medicine.

Well, as you can see, the medicine works a bit for me! The averages of the two situations are different and are within more or less one standard deviation of each other. However, with this plot you can see that sometimes, even with the medicine, I got some low values… Those were the days I was not at my best for "taking the water out of my knee" (another Brazilian expression, this one for peeing). On the other hand, on some other days, even without the medicine, I was feeling like the bullet that killed Kennedy and got a good performance! In the end, statistically, the data showed that the medicine was indeed working.
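For reference, the comparison behind figure 10 boils down to a few lines. The sketch below assumes the per-exam max-flow values have already been split into two plain arrays (the numbers are invented, not my real data):

import numpy as np

# Hypothetical per-exam max flow values in ml/s (invented numbers)
without_med = np.array([14.2, 11.8, 13.5, 10.9, 12.7, 15.1])
with_med = np.array([17.9, 16.3, 19.2, 14.8, 18.5, 17.1])

for label, data in [('without medicine', without_med), ('with medicine', with_med)]:
    # Sample mean and standard deviation (ddof=1 for the unbiased estimate)
    print('%s: mean = %.1f ml/s, std = %.1f ml/s'
          % (label, data.mean(), data.std(ddof=1)))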

Conclusions

I was very happy that I could do this little project. Of course, these results can't be used as a definitive medical assessment. I'm not a physician, so I'm not qualified to take any action whatsoever based on them. I'll show them to my doctor and he will decide what to do with them. The message I would like to transmit to my audience (students, friends, etc) is that what you learn at university is not just "stuff to pass the exams". These are things that can help a lot if you really understand them. This project didn't need any special magical skill! Arduino is simple to use, the load cell and the ESP wifi module have libraries, and the formula to compute flow from weight is straightforward. Even the simple signal filtering that I used (moving average) got good results for the max flow value. Hence, I encourage you, technology enthusiast, to try things out! Use the knowledge you acquired in your undergrad studies! It's not only useful, it's fun! Who would have thought that, at some point in my life, I would pee on one of my projects!!! 😅

Thank you for reading this post. I hope you liked it and, as always, feel free to comment, give me feedback, or share if you like! See you in the next post!

References

[1] – Load cell
[2] – Urofluximeter GitHub
[3] – NIDHI Uroflow
