Thursday, March 25, 2021

Enabling Hibernation in Ubuntu 20.04 LTS using a Swap File

Enabling the hibernation option on Ubuntu 20.04 LTS didn't work the way I used to do it, so I had to explore a bit further and do things slightly differently. This blog post records the steps I took to get hibernation working.

1. First of all, create a swap file using the following commands.

sudo fallocate -l 17G /swapfile

sudo chmod 600 /swapfile

sudo mkswap /swapfile

sudo swapon /swapfile


Once done, add the following entry to the end of the /etc/fstab file so that this swap file is used.

/swapfile none swap sw 0 0
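To double-check that the swap file is actually active (a quick sanity check, not part of the original steps), either of the following should list /swapfile with the expected size.

sudo swapon --show

free -h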

2. Check the UUID of the partition on which the swap file resides. One way is to look at that partition's entry in the /etc/fstab file using the following command.

cat /etc/fstab

Take note of the UUID string, which we will need in a later step.
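Alternatively, the findmnt command I used in my Ubuntu 18.04 post (further down this page) prints the UUID of the filesystem holding the swap file directly, so you don't have to read fstab by eye.

sudo findmnt -no UUID -T /swapfile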

3. Check the offset of the swap file within the storage device using either of the following commands, and take note of that offset value. (The filefrag command below extracts the physical offset of the swap file's first extent.)

sudo filefrag -v /swapfile | awk '{ if($1=="0:"){print substr($4, 1, length($4)-2)} }'

sudo swap-offset /swapfile
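Either command should print a single integer, which is the physical offset of the swap file. In my case the value looked like the one below (yours will differ).

9807872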

4. Now, open the /etc/default/grub file and update the relevant line as follows.

Original line:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

Updated line:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash resume=UUID=a27fc21e-3315-4497-99aa-1fe7fad64091 resume_offset=9807872"

Note that the UUID value and the resume offset value are the ones found in steps 2 and 3 above.

Once the grub file is updated, run the following command for the changes to take effect.

sudo update-grub
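After the next reboot, one way to confirm that the kernel actually picked up these parameters (an extra check, not part of the original steps) is to inspect the kernel command line; the resume and resume_offset values should appear in the output.

cat /proc/cmdline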

5. To test whether hibernation is working now, use either of the following commands. I personally prefer the second one, as it prints some verbose output while the system is hibernating and later resuming.

sudo systemctl hibernate

sudo pm-hibernate
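Note that pm-hibernate is not installed by default on recent Ubuntu releases; if the command is missing, it usually comes from the pm-utils package.

sudo apt install pm-utils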

Cheers!


References:

  1. https://linuxize.com/post/create-a-linux-swap-file/
  2. https://askubuntu.com/questions/1240123/how-to-enable-hibernate-option-in-ubuntu-20-04
  3. https://wiki.archlinux.org/index.php/Power_management/Suspend_and_hibernate#Hibernation_into_swap_file


Tuesday, April 21, 2020

Encrypting Files Using GnuPG

This post shows how to use GnuPG to encrypt and decrypt files in a Linux environment.

1. If you haven't created your GnuPG key pair yet, you can use the following commands to create it and view its details.

Create a pair of GnuPG keys using the following command.

gpg --gen-key

The keys and their relevant information are stored in .gnupg directory under your home directory. You can view the public keys in your keyring using the following command.

gpg --list-key

You can view the private keys using the following command.

gpg --list-secret-keys

2. Encrypting a file called "private-file.txt" can be done as follows. We can either specify a name for the encrypted file, or let GnuPG name the new file automatically by appending the .gpg extension to the name of the plaintext file.

gpg --encrypt --recipient your.email@gdomain.com private-file.txt

gpg --output encrypted.gpg --encrypt --recipient your.email@gdomain.com private-file.txt

3. Decrypting an encrypted file such as "private-file.txt.gpg" or "encrypted.gpg" can be done as follows. Similar to the previous case, we can either specify a name for the decrypted file using --output or redirect the output to a file ourselves.

gpg --output private-file.txt --decrypt private-file.txt.gpg

gpg --decrypt encrypted.gpg > private-file.txt
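As a quick sanity check (my own addition, with check.txt as an arbitrary file name), decrypting to a separate file and comparing it against the original should show no differences.

gpg --output check.txt --decrypt encrypted.gpg

diff private-file.txt check.txt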

4. Encrypting all the files in a directory can be done as follows.

gpg --encrypt-files --recipient your.email@gdomain.com /path/to/the/directory/*

5. Decrypting all the .gpg files in a particular directory can be done as follows.

gpg --decrypt-files /path/to/the/directory/*.gpg

Resources: 

1. https://blog.ghostinthemachines.com/2015/03/01/how-to-use-gpg-command-line/

2. https://www.gnupg.org/gph/en/manual.pdf

~*************~

Friday, April 17, 2020

Sending Secure Emails with OpenPGP

Using encryption in our electronic communication is essential to protect our security and privacy. Here's how we can use the OpenPGP standard to send and receive emails securely. While there are many software tools to get this done, I prefer this way.

1. Create a pair of keys with GnuPG (GNU Privacy Guard), a free implementation of the Pretty Good Privacy (PGP) / OpenPGP standard, using the following command.

gpg --gen-key

The keys and their relevant information are stored in .gnupg directory under your home directory. You can view the public keys in your keyring using the following command.

gpg --list-key

You can view the private keys using the following command.

gpg --list-secret-keys

2. Log in to your email account from the Thunderbird email client. Thunderbird is available by default on most Linux distributions, including Ubuntu.

3. Install the Enigmail plug-in in Thunderbird. Since we have already created the GPG keys, Enigmail will automatically detect them and start using them. If we hadn't created the keys already, Enigmail can help create them as well.

4. From the menu bar of Thunderbird, select the Enigmail item and then the Key Management option, which will display your key. Right-click on your key and select the option "Upload Public Keys to Keyservers". This will post your public key to a public key server.
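If you prefer doing this from the terminal instead of through Enigmail, gpg can upload the key directly; the key ID below is a placeholder for your own key's ID, and keys.openpgp.org is just one common keyserver choice.

gpg --keyserver keys.openpgp.org --send-keys <your-key-id>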

5. Now we are ready to send and receive encrypted emails. When you compose an email in Thunderbird, there is a padlock button that turns on encryption for that email. When you enable it and hit the send button, Enigmail will prompt you if the public key of the recipient is not available locally; in that case, it can also fetch the required keys from keyservers.

References:

1. https://emailselfdefense.fsf.org/en/

2. https://blog.ghostinthemachines.com/2015/03/01/how-to-use-gpg-command-line/

~***********~

Tuesday, March 31, 2020

Setting up Hibernation on Ubuntu 18.04 LTS

The ability to hibernate the computer when we are done for the day and pick up where we left off next time was a useful feature Ubuntu offered by default some time back. Unfortunately, recent Ubuntu versions do not offer this feature off-the-shelf. Recently, I wanted to get this feature onto my laptop running Ubuntu 18.04, and the following are the steps I followed.

1. Creating a swap file

My laptop has 8GB of RAM, so we need a swap space of at least the same size as the RAM. Since I didn't want to allocate a separate swap partition, I created a swap file as follows.

sudo fallocate -l 8G /swapfile2
sudo chmod 600 /swapfile2
sudo mkswap /swapfile2
sudo swapon /swapfile2


Append the following line to /etc/fstab file in your system.

/swapfile2 none swap sw 0 0

2. Enabling hibernation

Check the UUID of the device where the swap file is located using the following command. The UUID is the long hexadecimal string you can see in the output.

sudo findmnt -no SOURCE,UUID -T /swapfile2

Install the following tool.

sudo apt install uswsusp

Run the following command. When prompted, continue without a valid swap space by answering 'yes', and then select the device partition where the swap file exists (don't select the swap file itself).

sudo dpkg-reconfigure -pmedium uswsusp

I'm not sure whether I ran the following command next. Probably I did.

sudo update-initramfs -u

3. Enabling the resume from hibernation at next boot

We need to update the /etc/default/grub file as follows.

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash resume=UUID=<swap uuid>"

The following is how mine looks like after the modification.

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash resume=UUID=cda0136e-ffd9-4a0c-8657-a6511517aa71"

4. Testing hibernation

Run the following command to hibernate your computer. When you turn the computer on next time, it should resume from where you left off.

sudo pm-hibernate
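If nothing happens, a quick sanity check (my own suggestion, not part of the original steps) is to confirm that the kernel advertises suspend-to-disk at all; the output of the following should include the word "disk".

cat /sys/power/state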

References:

1. https://askubuntu.com/questions/6769/hibernate-and-resume-from-a-swap-file

2. https://askubuntu.com/questions/548015/ubuntu-14-04-sudo-pm-hibernate-doesnt-work

~******************~

Tuesday, January 29, 2019

Sending Samples to Python from GRC using ZMQ Sink

When we need to process some data generated by an SDR device, the most convenient approach we currently have is saving the data into a file and then reading that file from Python. However, if the requirement is to process the data in real-time as it is generated by the SDR device, saving to a file is not the right way. GNURadio Companion provides a special set of sink blocks that use the ZMQ messaging protocol for such purposes. This post demonstrates how to use one such sink to deliver raw samples generated by a GRC flow graph into a Python script.

(1) Create the following flow graph in GNURadio Companion. Instead of taking data from a real SDR device, we use two Signal Source blocks to generate two cosine signals at 3 MHz and 5 MHz. We set the sample rate to 4 MHz. The Throttle block is necessary to regulate the data flow through the flow graph since we are not using real SDR hardware. Most importantly, we use a ZMQ Push Sink block. Notice that we have given the localhost IP address and an arbitrarily selected port number as the destination of the data.



(2) Now, in order to capture the data, we need a Python script that implements a ZMQ Pull client. Create a Python program with the following content and save it as client.py.


import time
import zmq
import random
import numpy as np
import matplotlib.pyplot as plt  # only needed for the optional plotting lines below

def consumer():
    # Identify this consumer instance (useful when running several clients).
    consumer_id = random.randrange(1, 10005)
    print("I am consumer #%s" % (consumer_id))

    # Create a ZMQ PULL socket and connect to the ZMQ Push Sink of the flow graph.
    context = zmq.Context()
    consumer_receiver = context.socket(zmq.PULL)
    consumer_receiver.connect("tcp://127.0.0.1:5557")

    while True:
        # Each message is a chunk of raw bytes pushed by the flow graph.
        buff = consumer_receiver.recv()
        print(time.time())

        # The samples arrive as interleaved 32-bit floats (I, Q, I, Q, ...),
        # so combine every pair into one complex sample.
        data = np.frombuffer(buff, dtype="float32")
        data = data[0::2] + 1j*data[1::2]
        print(type(data))
        print(len(data))

        # Uncomment the following lines to save a PSD plot of one chunk and exit.
        #plt.figure()
        #plt.psd(data, NFFT=len(data), Fs=4e6, Fc=1e3)
        #plt.savefig("psd.png")
        #time.sleep(0.5)
        #exit()

consumer()

(3) We need to start the client Python program first from a terminal.

python client.py

(4) Now, start the GNURadio Companion flow graph. On the terminal where our Python program is running, we should now be able to see chunks of data arriving. The number of samples contained in each chunk varies over time, which I'm not exactly sure why. The following screenshot shows the output on the terminal.


(5) If we activate the commented lines, we can save a plot to a PNG file showing the power spectral density (PSD) of the received signal. As expected, there are two peaks, one on each side of the center frequency and 1 MHz away from it. This is because our sample rate was 4 MHz while the captured signal contained two signals at 3 MHz and 5 MHz. The following figure shows that PSD graph.

The Python script and the GRC flow graph file are kept in a GitHub repository so it is easy to try this out. Cheers!

Useful links:

[1] More reading on the PyZMQ library.

[2] A question on StackExchange where somebody suggested using the ZMQ Sink.

[3] The GitHub repository with my code illustrating this work.




Sunday, January 6, 2019

Starting Viber with too large resolution

After installing a brand new Ubuntu 18.04 LTS on my computer, I made some adjustments to the operating system's font sizes to better suit my eyes. However, I realized that something was wrong after installing Viber on my desktop. The GUI of the Viber application appeared so huge that it overflowed the screen, making it completely unusable. After searching the web, the only solution I found was to start Viber from the terminal with an extra environment variable called QT_SCALE_FACTOR. The following is the command to run on the terminal to start Viber at a usable scale.

(Note: it is important to set the environment variable before invoking the Viber executable, as shown.)

QT_SCALE_FACTOR=0.6 /opt/viber/Viber
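To avoid typing this every time, one simple option (my own workaround, not from the original fix) is to define a shell alias in ~/.bashrc.

alias viber='QT_SCALE_FACTOR=0.6 /opt/viber/Viber'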


Sunday, December 30, 2018

Truly Reproducible Research Papers

A slide from Prof. Barry Smyth's presentation
If you perform an experiment and get some interesting results, but somebody else cannot redo the experiment and get the same results, something is wrong with your finding. This is called the reproducibility of research: if it is not reproducible, it is not science. You might think that the systematic research carried out by academics and professional scientists who publish papers in conferences and journals is reproducible. Not really.

The majority of research papers I've come across in my own domain are bare descriptions and explanations of results, without proper support for reproduction of those results by anybody interested. Even when a good-quality paper provides a lot of detail about its experimental setup and settings, it is difficult to truly recreate the results based solely on what is in the paper. It is often necessary to contact the authors and correspond back and forth several times to get things clear. Similarly, if I ask myself whether I could reproduce a research work I published a few years ago based solely on the details I put down in my own paper, I unfortunately have to give a big 'No'.

This is a bad way to do science.

It would be unfair to computer scientists to say they are not putting any effort into making their research reproducible. There are two important ways they try to do it these days. The first is giving away the data sets they have collected. This allows third parties to verify their results and also to extend and build upon them. The second is to provide the source code of the experimental implementations they have made. They usually put their code into a GitHub repository and provide the link in the research paper so that readers can find the repository and reuse the code.

Another slide from Prof. Barry Smyth's presentation
Recently I attended a talk delivered by Prof. Barry Smyth at UCD, Ireland, where he suggested two interesting ways to make our research papers reproducible. The first is a practice that is much simpler and easier to adopt: produce a Jupyter Notebook alongside the scientific publication that contains the software code, data, descriptions, and explanations in a well-documented form that a third party can quickly run and build upon. If you haven't used or read about Jupyter Notebooks, have a look at the first link in the references section. It's a way to produce well-documented code where the code, its descriptions, and its output appear together in a report-like format.

There's an even more powerful way of making research papers reproducible. Imagine you are writing a research paper that reports a 30% improvement in something. How do you enable the reader to verify whether this number is truly 30%, perhaps using their own experimental data? If I'm giving away the source code of my implementations, does the reader have sufficient information to locate the correct programs and execute them in the correct sequence to get the final result? This is where the tool "Kallysto" comes in. It is a tool developed by Prof. Barry Smyth to make scientific publications fully reproducible and traceable. Kallysto combines LaTeX with Jupyter Notebooks in such a way that your LaTeX manuscript is directly linked to the original data and the code that analyzes it. The typical workflow of writing a research paper is to (1) analyze the data, (2) produce graphs as images or PDF files, and finally (3) create a LaTeX manuscript that explicitly includes those graphs. With Kallysto, when you compile your LaTeX source files, the Jupyter Notebooks that analyze the data are run and their results generated on the fly, and those results are used by LaTeX to produce the final PDF document.

Prof. Barry Smyth's idea is to make scientific publications truly reproducible by scripting everything, from the data to the results and finally to the LaTeX documents.

References:

[1] Jupyter notebooks

[2] The Netflix Papermill tool

[3] The tool made by Prof. Barry Smyth called Kallysto