# Perl Weekly Challenge 46 Task #2

This week's Perl Weekly Challenge Task #2 has an interesting solution. The challenge is as follows:

### Is the room open?

There are 500 rooms in a hotel, and 500 employees have keys to all the rooms. The first employee opened the main entrance door of every room. The second employee then closed the doors of rooms 2, 4, 6, 8, 10 and so on up to 500. The third employee then closed the door if it was open, or opened the door if it was closed, of rooms 3, 6, 9, 12, 15 and so on up to 500. Similarly, the fourth employee did the same for rooms 4, 8, 12, 16 and so on up to 500. This goes on until all employees have had a turn.

Write a script to find out all the rooms still open at the end.

## Solution

The solution is given in four steps.

Consider a single room with number $N$. How many times did an employee open or close the door of that room? Look at the $k$-th employee: they open or close the door if and only if $k$ is a divisor of $N$. So if we want to know how often the door has been opened or closed, we must count the number of divisors of the room number $N$.

If the door of room $N$ has been opened or closed an even number of times, the door ends up closed. Likewise, if it has been opened or closed an odd number of times, the door ends up open. So this problem is equivalent to finding all $N$ equal to or below 500 that have an odd number of divisors.

Take the prime decomposition of $N$
$$N = p_1^{k_1} p_2^{k_2} … p_i^{k_i}.$$
The number of divisors of $N$ is given by $(k_1 + 1)(k_2 + 1)…(k_i + 1)$.
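For example, $36 = 2^2 \cdot 3^2$ has $(2 + 1)(2 + 1) = 9$ divisors:
$$1, 2, 3, 4, 6, 9, 12, 18, 36.$$
Note that $9$ is odd, and $36 = 6^2$ is a square.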

Using the prime decomposition of $N$, the number of divisors of $N$ is odd if and only if $k_1$, $k_2$, …, $k_i$ are all even. Indeed, if some $k_j$ were odd, $k_j + 1$ would be even, hence the product (the number of divisors of $N$) would be even; and if all $k_j$ are even, every factor $k_j + 1$ is odd, so the product is odd. Finally, if $k_1$, $k_2$, …, $k_i$ are all even, $N$ is a perfect square, with square root $p_1^{k_1/2} p_2^{k_2/2} … p_i^{k_i/2}$.
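The parity argument can also be double-checked by brute force; a short Python simulation (not part of the original solution, `True` meaning the door is open):

```python
# Simulate the 500 employees toggling the 500 doors.
doors = [False] * 501  # doors[i] is True when room i is open; index 0 is unused

for employee in range(1, 501):  # employee k toggles rooms k, 2k, 3k, ...
    for room in range(employee, 501, employee):
        doors[room] = not doors[room]

open_rooms = [room for room in range(1, 501) if doors[room]]
print(open_rooms)  # the perfect squares 1, 4, 9, ..., 484
```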

Therefore, the only open rooms are those whose room number is a perfect square equal to or below 500. A one-liner in Raku (Perl 6) is:

say $_**2 for 1..(500.sqrt);

# Install (x)Ubuntu 18.04 on an Asus Zenbook 13

Recently I bought an Asus Zenbook 13 RX333FN. I removed Windows 10 and installed xUbuntu 18.04 on the laptop. However, not every feature I use works out of the box. In these notes I explain how to fix this. These notes probably apply to similar Asus Zenbooks as well, and I expect the fixes to become obsolete with newer versions of Ubuntu. Don't forget to check the ArchLinux wiki on the Asus Zenbook UX333 as well.

## Enable deep suspend mode

The Zenbook doesn't go into deep suspend mode automatically when I close the lid. To enable deep suspend mode, open /etc/default/grub and add mem_sleep_default=deep to GRUB_CMDLINE_LINUX_DEFAULT. In my case I have

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem_sleep_default=deep"

## (Partially) reduce power usage and heat

The laptop overheats quickly. I followed the Most Effective Ways To Reduce Laptop Overheating In Linux. Still the CPU easily heats up to 40 to 50 degrees Celsius. I'm guessing that this is the downside of having such a small laptop.

## Fix sound

The sound card is supported by Linux kernel 4.20 and onward. I installed the 4.20.17 kernel using ukuu.

sudo add-apt-repository ppa:teejee2008/ppa
sudo apt update
sudo apt install ukuu
sudo ukuu-gtk # install kernel 4.20.17

In order to boot the new kernel, enter the BIOS, go to "Advanced Menu", then the "Security" tab, and set "Secure Boot" to Off.

## Enable Nvidia CUDA support

To enable Nvidia CUDA support, first disable the nouveau driver. Open /etc/modprobe.d/blacklist-nvidia-nouveau.conf and add

blacklist nouveau
options nouveau modeset=0

Next open Software & Updates in the menu, go to Additional Drivers, and select nvidia-driver-418 (open source). Click on Apply Changes. Go to the Nvidia home page and download the CUDA Toolkit and cuDNN. To download cuDNN you're required to create an Nvidia account.
Install the packages:

sudo dpkg -i cuda-repo-ubuntu1804-10-1-local-10.1.168-418.67_1.0-1_amd64.deb
sudo dpkg -i libcudnn7_7.6.0.64-1+cuda10.1_amd64.deb

Reboot your machine and check whether CUDA works:

> nvidia-smi
Mon Jun 10 15:02:51 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.67       Driver Version: 418.67       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce MX150       On   | 00000000:02:00.0 Off |                  N/A |
| N/A   57C    P0    N/A /  N/A |    111MiB /  2002MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      3348      G   /usr/lib/xorg/Xorg                           109MiB |
|    0      6656      G   /usr/lib/firefox/firefox                       1MiB |
+-----------------------------------------------------------------------------+

## Retrieve the Windows 10 license key

The Zenbook comes with a Windows 10 license key hardcoded in the firmware. You can extract the license key as follows:

sudo grep -aPo '[\w]{5}-[\w]{5}-[\w]{5}-[\w]{5}-[\w]{5}' /sys/firmware/acpi/tables/MSDM

# The estimator for the variance of $n$ normal distributions

Fix a variance $\sigma^2 > 0$ and let $n > 0$ and $\mu_1, \mu_2, …, \mu_n \in \mathbb{R}$. Given $m = m_1 + m_2 + … + m_n$ samples
$$\begin{split} X_{11}, X_{12}, …, X_{1m_1} &\sim N(\mu_1, \sigma^2), \\ X_{21}, X_{22}, …, X_{2m_2} &\sim N(\mu_2, \sigma^2), \\ &\vdots \\ X_{n1}, X_{n2}, …, X_{nm_n} &\sim N(\mu_n, \sigma^2), \end{split}$$
where $m_1, m_2, …, m_n > 1$, what is the best estimator for the variance $\sigma^2$?
In other words, we draw samples from $n$ normal distributions with the same variance, but with not necessarily equal means. What is the best way to estimate the variance of the $n$ normal distributions?

# Main result

Surprisingly, there is an answer. It is surprising because we are looking for the best estimator for the variance: beforehand there is no guarantee that a best estimator actually exists. Moreover, the answer is simple, though a bit harder to prove.

Define $X = (X_{ij})$ to be the joint distribution of $\{X_{ij}\}$. The unique uniformly minimum-variance unbiased estimator (UMVUE) for $\sigma^2$ is given by
$$S(X) = \frac{1}{m - n} \sum_{i = 1}^n \sum_{j = 1}^{m_i} (X_{ij} - \overline{X}_{i})^2,$$
where $\overline{X}_i = \frac{1}{m_i} \sum_{j = 1}^{m_i} X_{ij}$. In other words, the best estimator for $\sigma^2$ is $S(X)$.

# An example

Fix $\sigma = 0.35$. Suppose
$$\begin{split} X_1 &= (1.69, 1.35, 1.75, 1.45, 1.77) \sim N(1.5, \sigma^2), \\ X_2 &= (2.27, 1.73, 1.46) \sim N(2.0, \sigma^2), \\ X_3 &= (-0.28, 0.32, 0.69, 0.14) \sim N(0.1, \sigma^2). \end{split}$$
We compute
$$\begin{split} m &= |X_1| + |X_2| + |X_3| = 5 + 3 + 4 = 12, \quad n = 3, \\ \overline{X}_1 &\approx 1.60, \quad \overline{X}_2 \approx 1.82, \quad \overline{X}_3 \approx 0.21. \end{split}$$
According to the main result we approximate the variance $\sigma^2$ as follows:
$$\begin{split} \frac{1}{m - n} \sum_{i = 1}^n \sum_{j = 1}^{m_i} (X_{ij} - \overline{X}_{i})^2 &= \frac{1}{12 - 3} ( (1.69 - 1.6)^2 + (1.35 - 1.6)^2 + (1.75 - 1.6)^2 \\ &\qquad + (1.45 - 1.6)^2 + (1.77 - 1.6)^2 + (2.27 - 1.82)^2 \\ &\qquad + (1.73 - 1.82)^2 + (1.46 - 1.82)^2 + (-0.28 - 0.21)^2 \\ &\qquad + (0.32 - 0.21)^2 + (0.69 - 0.21)^2 + (0.14 - 0.21)^2 ) \\ &\approx 0.11 \end{split}$$
Then the standard deviation is approximated by $\sqrt{0.11} \approx 0.33$. Not bad!

# The proof

Before we can start with the proof of the main result, we have to introduce some definitions and theorems from statistics.
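As a quick numerical check of the example above, here is a minimal NumPy sketch (not part of the original post) that computes the pooled estimator $S(X)$ on the example data:

```python
import numpy as np

# The three groups of samples from the example (true sigma = 0.35)
samples = [
    np.array([1.69, 1.35, 1.75, 1.45, 1.77]),
    np.array([2.27, 1.73, 1.46]),
    np.array([-0.28, 0.32, 0.69, 0.14]),
]

m = sum(len(x) for x in samples)  # total number of samples
n = len(samples)                  # number of groups

# S(X): pooled within-group sum of squares, divided by m - n
ss = sum(((x - x.mean()) ** 2).sum() for x in samples)
variance = ss / (m - n)           # roughly 0.108
print(variance, np.sqrt(variance))
```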
## Sufficient statistic

A statistic $T(X)$ is sufficient for the underlying parameter $\theta$ if and only if the conditional probability distribution of the data $X$, given the statistic $T(X)$, doesn't depend on the parameter $\theta$.

We have the following equivalent characterization of sufficiency of a statistic $T(X)$.

## Theorem (Fisher-Neyman factorization)

Let $f_{\theta}(x)$ be the probability density function of $X$. $T(X)$ is sufficient for the underlying parameter $\theta$ if and only if there exist non-negative functions $g$ and $h$ such that
$$f_{\theta}(x) = h(x) g_{\theta}(T(x)),$$
where $h(x)$ doesn't depend on $\theta$, and $g_{\theta}(T(x))$ depends only on $\theta$ and $T(x)$, but not otherwise on $x$.

## Complete statistic

A statistic $T(X)$ is complete if and only if for every distribution of the data $X$ and every measurable function $g$
$$\forall \theta: E_{\theta}(g(T)) = 0 \rightarrow \forall \theta: P_{\theta}(g(T) = 0) = 1.$$
In other words, if the expected value of $g(T)$ is zero, then $g(T) = 0$ almost everywhere.

## Theorem (Lehmann-Scheffe)

The Lehmann-Scheffe theorem provides the tool to prove that a sufficient and complete statistic is a UMVUE. Let $X = (X_1, X_2, …, X_n)$ be random samples from a distribution with probability density $f_{\theta}(x)$. Suppose $T$ is a sufficient and complete statistic for $\theta$. If $E(T(X)) = \theta$, then $T(X)$ is the unique uniformly minimum-variance unbiased estimator (UMVUE) for $\theta$.

## Lemma 1

Given independent samples $X_1 \sim N(\mu_1, \sigma^2)$ and $X_2 \sim N(\mu_2, \sigma^2)$, the statistics $S(X_1, X_2) = X_1 X_2$ and $T(X_1, X_2) = X_1 + X_2$ are complete.

### Proof

We prove the completeness of $T$. The completeness of $S$ is left as an exercise. Fix $\theta = (\mu_1, \mu_2, \sigma^2)$.
Let $g$ be a measurable function such that
$$E_{\theta}(g(T(X))) = E_{\theta}(g(X_1 + X_2)) = 0.$$
The probability density of $T(X_1, X_2)$ is
$$f_{\mu_1 + \mu_2, 2\sigma^2}(x) = (4\pi\sigma^2)^{-\frac{1}{2}} \mathrm{exp} \left( - \frac{(x - \mu_1 - \mu_2)^2}{4 \sigma^2} \right).$$
Therefore
$$0 = E_{\theta}(g(T(X))) = (4\pi\sigma^2)^{-\frac{1}{2}} \int g(x_1 + x_2) \mathrm{exp} \left(-\frac{(x_1 + x_2 - \mu_1 - \mu_2)^2}{4 \sigma^2}\right) dx_1 dx_2.$$
The $\mathrm{exp}$ term is always greater than $0$, therefore $g = 0$ almost everywhere, and hence $P_{\theta}(g(T) = 0) = 1$.

## Lemma 2

Given independent samples $X_1 \sim N(\mu_1, \sigma^2)$, $X_2 \sim N(\mu_2, \sigma^2)$, …, $X_n \sim N(\mu_n, \sigma^2)$, the statistics $\sum_i X_i$ and $\sum_i X_i^2$ are complete.

### Proof

Both statements follow by induction on $n$ and applying Lemma 1.

## Proof of the main result

Let $X_{ij}$ be as defined in the main result. Define $X = (X_{ij})$ to be the joint distribution of $\{X_{ij}\}$ with probability density
$$\begin{split} f(x) &= f_{\mu_1, \mu_2, …, \mu_n, \sigma^2}(x) \\ &= (2\pi)^{-\frac{m}{2}} \sigma^{-m} \mathrm{exp} \left(-\frac{1}{2\sigma^2} \sum_{i,j} (x_{ij} - \mu_i)^2 \right) \\ &= (2\pi)^{-\frac{m}{2}} \sigma^{-m} \mathrm{exp} \left(-\frac{1}{2\sigma^2} \sum_i m_i \mu_i^2\right) \\ &\qquad \times \mathrm{exp} \left(-\frac{1}{2\sigma^2} \sum_{i,j} x_{ij}^2\right) \mathrm{exp} \left(\sum_i \frac{\mu_i}{\sigma^2} \sum_j x_{ij}\right). \end{split}$$
Notice that the equation above satisfies the requirements of the Fisher-Neyman factorization for
$$T(X) = \left(\sum_{j} X_{1j}, \sum_{j} X_{2j}, …, \sum_{j} X_{nj}, \sum_{i,j} X_{ij}^2\right).$$
Therefore $T(X)$ is a sufficient statistic for $(\mu_1, \mu_2, …, \mu_n, \sigma^2)$. From Lemma 1 and Lemma 2 it follows that $T(X)$ is a complete statistic.
Define the statistic $S$ from $T$ as
$$\begin{split} S(X) &= \frac{1}{m - n} \left( \sum_{i,j} X_{ij}^2 - \sum_i \frac{1}{m_i} \Big( \sum_j X_{ij} \Big)^2 \right) \\ &= \frac{1}{m - n} \sum_i \left( \sum_j X_{ij}^2 - m_i \overline{X}_i^2 \right) \\ &= \frac{1}{m - n} \sum_i \left( \sum_j (X_{ij} - \mu_i)^2 - m_i (\overline{X}_i - \mu_i)^2 \right). \end{split}$$
With Lemma 2, $S$ is complete and sufficient. Moreover
$$\begin{split} E(S) &= \frac{1}{m - n} \sum_i \left( \sum_j E((X_{ij} - \mu_i)^2) - m_i E((\overline{X}_i - \mu_i)^2) \right) \\ &= \frac{1}{m - n} \sum_i \left( \sum_j \sigma^2 - m_i \frac{\sigma^2}{m_i} \right) \\ &= \frac{1}{m - n} \sum_i (m_i - 1) \sigma^2 = \sigma^2, \end{split}$$
because
$$\begin{split} E((X_{ij} - \mu_i)^2) &= \mathrm{Var}(X_{ij}) = \sigma^2, \\ E((\overline{X}_{i} - \mu_i)^2) &= \mathrm{Var}(\overline{X}_{i}) \\ &= \frac{1}{m_i^2} \left(\sum_j \mathrm{Var}(X_{ij}) + \sum_{j \neq k} \mathrm{Cov}(X_{ij}, X_{ik}) \right) \\ &= \frac{1}{m_i^2} \sum_j \sigma^2 = \frac{\sigma^2}{m_i}. \end{split}$$
Therefore $S$ is a complete, sufficient, and unbiased statistic for $\sigma^2$. From the Lehmann-Scheffe theorem it follows that $S$ is the UMVUE for $\sigma^2$. QED

# How I use Tmux

The last few years I've been using Tmux extensively. Tmux is a terminal multiplexer, i.e. Tmux enables you to create, access, and control multiple terminals from a single screen. Additionally, Tmux can be detached from a screen and continue running in the background, so a session can be reattached later, and not necessarily on the same screen. For example, you could run a Tmux session on a server, detach the session, log out in the evening, sleep, log in the next morning, and reattach the same session you detached the previous day.

Because I use Tmux on many different systems (laptops, home PCs, servers, IoT devices), I rarely change anything in the default Tmux settings. That way I know that all my Tmux sessions have the same key bindings and commands.
To my surprise, 98% of the time I use fewer than 20 Tmux commands. I took some time to summarize these commands. This list is by no means complete, nor is it meant to serve as a cheat sheet.

# Sessions

Sessions are used for separating different projects. Sessions can be handled from outside Tmux:

• tmux new -s session-name, creates a new session named session-name.
• tmux list-sessions or tmux ls, shows the list of existing sessions.
• tmux attach -t session-name, opens an existing session named session-name. Without the parameter -t session-name, Tmux will select the first session.

Inside Tmux, sessions can be managed as well.

• C+b s, shows the list of all existing sessions. This list can be used to switch sessions.
• C+b :new -s session-name, creates a new session. The current session can be renamed with C+b $.
• C+b d, detaches the session, and you'll end up in the original terminal.

# Windows

Inside each session you can create windows.

• C+b c, creates a new window.
• C+b w, list all existing windows in the session.
• C+b n and C+b p, will move to the next and previous window respectively.
• C+b 0-9, moves to the window with the given id-number.
• C+b &, kills the current window. The window is also killed if all panes are killed.

# Panes

Inside each window you can create panes.

• C+b ", split pane horizontally.
• C+b %, split pane vertically.
• C+b o, switches to the next pane. You can also use C+b <arrow> to move to another pane, but I usually don't have more than three panes open in one window, so I rarely use the arrows to switch panes.
• C+b+<arrow>, increases or decreases the size of a pane. If you hold down the arrow key, the pane continues to grow or shrink.
• C+b x, closes the current pane. You can also close the pane by killing the terminal inside the pane.

# Copy mode

I'm not using copy mode that often; usually I pipe the output of commands with tee to a text file instead. But when I do use it, the most used commands are:

• C+b [, enters copy mode. You can move around the pane with the arrow keys; I usually use copy mode to read back older output.
• C+<space>, starts the selection. Hit Alt+w when you're done selecting; the selection is moved to the copy buffer. With C+b ], the buffer is pasted at the current cursor position.
• tmux list-buffers or C+b :list-buffers, shows the list of stored buffers.
• tmux save-buffer -b buffer_n foo.txt or C+b :save-buffer -b buffer_n foo.txt, writes buffer_n to foo.txt.
• q, exits copy mode.

# Create swap space on disk

My webserver doesn't have swap space configured by default. This is not a big deal as long as you don't require a lot of memory; running this website doesn't require more than 300MB of RAM. But sometimes I would like to run other tools or programs on this server that do require more memory than I have available.

I found this nice method to create swap space on disk. In the following example we create a swap space of roughly 2GB: create an empty file of 2GB, then set the file up as swap space.

sudo dd if=/dev/zero of=/var/swap.img bs=2048k count=1000
sudo chmod 600 /var/swap.img
sudo mkswap /var/swap.img

Enabling or disabling the swap space is done from the terminal. To enable the swap space, run sudo swapon /var/swap.img. To disable it, run sudo swapoff /var/swap.img.

# Invert colors in Zathura (a.k.a. night mode)

Zathura is a highly customizable, functional, fast PDF reader focused on vim-like keyboard interaction. I use Zathura for reading PDF, PostScript, and DjVu files, and it is one of my main tools on every Linux distribution I use. One of the things I like about Zathura is inverting colors; this saves my eyes during the night, so I like to call it night mode. Inverting colors in Zathura is called recoloring and is bound to Ctrl+r, which took me some time to find out. I like to bind inverting colors to Ctrl+i instead; in ~/.config/zathura/zathurarc add:

map <C-i> recolor

This will do the trick.

# The product of two uniform distributions

Recently someone asked me the following question: what is the distribution of the product of two uniformly distributed random variables? Not knowing the answer immediately, we discussed the problem and concluded that this is not a trivial question. Below I show my solution to this problem.

## Numerical approach

We first want some intuition about what to expect. We generate 10000 pairs of random uniform samples between 0 and 1, multiply each pair, and plot a histogram of the results. We can do this with the following piece of Python code:

```python
import numpy as np
import matplotlib.pyplot as plt

rs = np.random.RandomState(37)

# 10000 pairs of uniform samples on [0, 1], multiplied pairwise
X = rs.rand(10000)
Y = rs.rand(10000)
Z = X * Y

plt.hist(Z, bins='auto')
plt.show()
```

The result is quite surprising: it seems that the product of two uniform distributions is distributed on a logarithmic scale. Now the question is: can we find the explicit distribution function, and can we prove its correctness?

## The distribution function of the product of two uniform distributions

Let $X$ and $Y$ be uniformly distributed on $[0, 1]$. Define $Z = XY$. We write $f_X$ and $f_Y$ for the density functions of $X$ and $Y$, respectively. Let $0 \leq T \leq 1$, then
$$\begin{split} P(Z < T) &= \int_0^1 P(xY < T) f_X(x) dx = \int_0^1 P(Y < T/x) f_X(x) dx \end{split}$$
If $x < T$, then $P(Y < T/x) = 1$, hence we split the integral
$$\begin{split} P(Z < T) &= \int_0^T f_X(x) dx + \int_T^1 P(Y < T/x) f_X(x) dx \\ &= \int_0^T f_X(x) dx + \int_T^1 \frac{T}{x} f_X(x) dx \\ &= \int_0^T 1 dx + \int_T^1 \frac{T}{x} dx \\ &= T - T \log(T). \end{split}$$
Hence
$$f_Z(x) = \frac{d}{dx} (x - x\log(x)) = 1 - (1 + \log(x)) = -\log(x).$$

## The distribution function of an arbitrary product of uniform distributions

We can do better. Let $X_1$, $X_2$, …, $X_n$ be independent uniform distributions on $[0, 1]$. What is the distribution function of $Z_n = X_1 X_2 \ldots X_n$? We will show, in general, that
$$f_{Z_n}(x) = \frac{(-1)^{n-1}}{(n-1)!} \log(x)^{n-1}, \quad 0 \leq x \leq 1.$$
For $n = 2$ we already showed that $f_{Z_2}(x) = -\log(x)$, and for $n = 1$ we simply have $f_{Z_1}(x) = 1$. Assume the formula holds for all $k \leq n$; we show that it then holds for $n+1$. By induction this proves that the formula holds for all $n$. We use the well-known identities (see https://en.wikipedia.org/wiki/List_of_integrals_of_logarithmic_functions)
$$\begin{split} \int \log(x)^n dx &= x \sum_{k=0}^n (-1)^{n-k} \frac{n!}{k!} (\log(x))^k, \\ \int \frac{\log(x)^n}{x} dx &= \frac{\log(x)^{n+1}}{n+1}. \end{split}$$
Using
$$\begin{split} P(Z_n < T) &= \int_0^1 P(x X_n < T) f_{Z_{n-1}}(x) dx \\ &= \int_0^1 P(X_n < T/x) f_{Z_{n-1}}(x) dx \\ &= \int_0^T f_{Z_{n-1}}(x) dx + \int_T^1 \frac{T}{x} f_{Z_{n-1}}(x) dx, \end{split}$$
where $Z_{n-1} = X_1 X_2 \ldots X_{n-1}$, we have
$$\begin{split} P(Z_{n+1} < T) &= \int_0^T f_{Z_{n}}(x) dx + \int_T^1 \frac{T}{x} f_{Z_{n}}(x) dx \\ &= \int_0^T \frac{(-1)^{n-1}}{(n-1)!} \log(x)^{n-1}dx + \int_T^1 \frac{T}{x} \frac{(-1)^{n-1}}{(n-1)!} \log(x)^{n-1}dx \\ &= \frac{(-1)^{n-1}}{(n-1)!} \left( \int_0^T \log(x)^{n-1}dx + \int_T^1 \frac{T}{x} \log(x)^{n-1} dx \right) \\ &= \frac{(-1)^{n-1}}{(n-1)!} \left( T \sum_{k=0}^{n-1} (-1)^{n-1-k} \frac{(n-1)!}{k!} (\log(T))^k - \frac{T}{n} \log(T)^n \right). \end{split}$$
The derivative of $P(Z_{n+1} < T)$ telescopes, i.e. using
\begin{equation*}
\frac{d}{dx} x\log(x)^k = k\log(x)^{k-1} + \log(x)^k,
\end{equation*}
we have
$$\begin{split} \frac{d}{dx} \left( x \sum_{k = 0}^{n} (-1)^{n-k} \frac{n!}{k!} \log(x)^k \right) &= \sum_{k = 0}^{n} (-1)^{n-k} \frac{n!}{k!} (k\log(x)^{k-1} + \log(x)^k) \\ &= \sum_{k = 0}^{n-1} (-1)^{n-k+1} \frac{n!}{k!} \log(x)^k + \sum_{k = 0}^{n} (-1)^{n-k} \frac{n!}{k!} \log(x)^k \\ &= \log(x)^{n} \end{split}$$
Therefore
$$\begin{split} f_{Z_{n+1}}(x) &= \frac{d}{dx} P(Z_{n+1} < x) \\ &= \frac{(-1)^{n-1}}{(n-1)!} \left( \log(x)^{n-1} - \log(x)^{n-1} - \frac{1}{n} \log(x)^{n} \right) = \frac{(-1)^n}{n!} \log(x)^n. \end{split}$$
Hence by induction the formula for $f_{Z_n}(x)$ holds for all $n$.
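As a numerical sanity check of the general formula (a short sketch, not part of the original derivation), we can compare the empirical CDF of $Z_3$ with the CDF obtained by integrating $f_{Z_3}(x) = \frac{1}{2}\log(x)^2$, namely $P(Z_3 < t) = t \left(1 - \log(t) + \frac{1}{2}\log(t)^2\right)$:

```python
import numpy as np

rng = np.random.default_rng(37)
# Products of three independent uniform samples on [0, 1]
z = rng.random((200_000, 3)).prod(axis=1)

for t in (0.1, 0.3, 0.5):
    empirical = (z < t).mean()
    exact = t * (1 - np.log(t) + 0.5 * np.log(t) ** 2)
    print(f"t={t}: empirical={empirical:.3f}, exact={exact:.3f}")
```

With 200000 samples the empirical and exact values agree to about two decimal places.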

# Encrypt your drive with cryptsetup in linux

I prefer to encrypt all my external hard drives and USB sticks. One way to do this is with cryptsetup. Below are instructions on how to encrypt a drive. Suppose you have a partition on your drive at /dev/sdb1.

1. Install cryptsetup; on Ubuntu run sudo apt install cryptsetup.
2. Encrypt the partition:
sudo cryptsetup -v --verify-passphrase luksFormat /dev/sdb1
3. Open the encrypted partition:
sudo cryptsetup luksOpen /dev/sdb1 encrypted-partition

The partition can now be found at /dev/mapper/encrypted-partition.

4. Create your favorite filesystem on the encrypted partition. I use the ext4 filesystem:
sudo mkfs.ext4 -L label-name /dev/mapper/encrypted-partition
5. Close the encrypted partition:
sudo cryptsetup luksClose /dev/mapper/encrypted-partition

Done!