Saturday, April 29, 2017

OpenSSL on Linux Terminal for Cryptography

OpenSSL is the most important library we will be using for cryptographic functions on a Linux computer. Hundreds of cryptographic algorithms and related functions are available from the Linux terminal through OpenSSL. In this post, I would like to note down a few useful things we can do with the OpenSSL library on Linux.

Base64 Encoding:

When dealing with various files and data types, sometimes it is useful to convert everything into text format. Base64 is an encoding technique which we can use to convert arbitrary data into printable text and then decode it back to the original format. Let's see how we can do it.

First of all, let's select an image file in jpg format. We are going to encode this image file in base64 format as follows.

cat old-house.jpg | openssl enc -base64 > encoded-image.b64

Now, if you try to open the encoded file, you will see that it no longer opens as a usual image file. However, the nice thing is, you can open it using a text editor, because the content consists of printable characters now. Let's decode the encoded file back to the original format.

cat encoded-image.b64 | openssl enc -base64 -d > decoded-image.jpg
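As a quick sanity check, the whole round trip can be tried on a small text file (the file names here are arbitrary; this assumes only that the openssl command-line tool is installed):

```shell
# create a small sample file
printf 'hello openssl' > message.txt

# encode it to base64
openssl enc -base64 -in message.txt -out message.b64

# decode it back and compare with the original
openssl enc -base64 -d -in message.b64 -out decoded.txt
cmp message.txt decoded.txt && echo "round trip OK"
```

If the decoded file matches the original byte for byte, the round trip worked.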

Data Encryption Standard (DES):

DES is an older encryption algorithm which is now considered insecure, so we should not use it for anything serious nowadays. Anyway, with the OpenSSL library, we can use DES as follows to encrypt data.

openssl enc -des -a -in plaintext.txt -out ciphertext.des

Once encrypted, we can use the following command to decrypt the data back to the plain text.

openssl enc -des -d -a -in ciphertext.des -out plaintext.txt

Advanced Encryption Standard (AES):

AES is a stronger encryption algorithm than DES and is the recommended choice today. We can encrypt a file using AES and then decrypt the file back to the plain text using the following two commands.

openssl aes-256-cbc -in plaintext.txt -out ciphertext.aes

openssl aes-256-cbc -d -in ciphertext.aes -out plaintext.txt
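The two commands above prompt for a passphrase interactively. For scripting, the passphrase can also be supplied on the command line, as in this sketch (the -pbkdf2 option, available in OpenSSL 1.1.1 and later, selects stronger key derivation; the file names and passphrase are made up for the demo):

```shell
# create a sample plaintext file
printf 'secret message' > plaintext.txt

# encrypt with AES-256-CBC, deriving the key from a passphrase via PBKDF2
openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo123 -in plaintext.txt -out ciphertext.aes

# decrypt it back and compare with the original
openssl enc -aes-256-cbc -d -pbkdf2 -pass pass:demo123 -in ciphertext.aes -out recovered.txt
cmp plaintext.txt recovered.txt && echo "AES round trip OK"
```

Note that supplying a passphrase with -pass pass:... exposes it to other users via the process list, so it is only suitable for experiments like this.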

Rivest, Shamir, Adleman (RSA):

When we are browsing the web on our computer or mobile phone through a secure channel, one of the most important encryption algorithms helping us is RSA. It is an asymmetric key encryption algorithm where we have two keys, called the public key and the private key. The private key is what we always keep to ourselves, while the public key can be distributed to everybody.

First of all, we need to generate a key pair in order to use RSA. Let's generate the private key and the public key using the following two commands. Here we generate a 2048-bit key, since 1024-bit RSA keys are no longer considered secure.

openssl genrsa -out private_key.pem 2048

openssl rsa -in private_key.pem -out public_key.pem -outform PEM -pubout

If you check the local directory where you ran the above commands, you should now see two new files, private_key.pem and public_key.pem, which are our keys. Now we can encrypt data using RSA. Let's say we are going to encrypt a text file so that only a friend can see it. Then we have to use the friend's public key during the encryption as follows.

openssl rsautl -encrypt -inkey public_key.pem -pubin -in plaintext.txt -out ciphertext.txt

Now, when your friend decrypts the ciphertext, he or she should use the corresponding private key as follows.

openssl rsautl -decrypt -inkey private_key.pem -in ciphertext.txt -out plaintext.txt
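Note that newer OpenSSL releases deprecate rsautl in favour of pkeyutl, which takes the same kind of arguments. A self-contained round trip might look like this sketch (the key and file names are arbitrary):

```shell
# generate a fresh 2048-bit key pair for the demo
openssl genrsa -out demo_private.pem 2048
openssl rsa -in demo_private.pem -pubout -out demo_public.pem

# encrypt a short message with the public key
printf 'for your eyes only' > msg.txt
openssl pkeyutl -encrypt -pubin -inkey demo_public.pem -in msg.txt -out msg.enc

# decrypt with the private key and check the round trip
openssl pkeyutl -decrypt -inkey demo_private.pem -in msg.enc -out msg.dec
cmp msg.txt msg.dec && echo "RSA round trip OK"
```

Keep in mind that RSA can only encrypt messages shorter than the key size (minus padding), which is why in practice it is used to exchange symmetric keys rather than to encrypt bulk data.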

Digital Signature of a Message:

When sending an encrypted message to someone, sometimes we need to prove that we are the person who sent the message. Just because we use RSA encryption as above, we cannot prove that we sent it. In order to prove the sender's identity, we can use digital signatures. A digital signature is nothing other than a hash value of the original message encrypted using the private key of the sender. Since a person's private key always stays with that person, a digital signature can be produced only by the owner of the sender's private key. We can sign as follows using the RSA private key.

openssl dgst -sha256 -sign private_key.pem -out signature.txt plaintext.txt

Now, at the recipient's side, he or she can verify the sender's signature by decrypting the signature file using the sender's public key (which is available) and then recalculating the hash value of the original message. The following command performs that verification.

openssl dgst -sha256 -verify public_key.pem -signature signature.txt plaintext.txt
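Putting both sides together, here is a small end-to-end example of signing and verifying (the key and file names are just examples); a successful verification prints "Verified OK":

```shell
# a key pair for the demo and a message to sign
openssl genrsa -out sign_private.pem 2048
openssl rsa -in sign_private.pem -pubout -out sign_public.pem
printf 'message to be signed' > message.txt

# the sender signs the SHA-256 digest of the message with the private key
openssl dgst -sha256 -sign sign_private.pem -out signature.bin message.txt

# the recipient verifies the signature with the sender's public key
openssl dgst -sha256 -verify sign_public.pem -signature signature.bin message.txt
```

If anyone tampers with message.txt after signing, the last command reports "Verification Failure" instead.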

There are so many things we can do with the OpenSSL library. It is up to you to explore it further.

Monday, April 24, 2017

Using Autopsy Tool for Forensic Disk Analysis

Autopsy is a web-based GUI for the Sleuth Kit forensic investigation toolkit. It can be used for interesting forensic analysis work on images acquired from storage devices. In this blog post, I'm writing down the basic steps to install and use the Autopsy tool to analyze a disk image.

Since using a real disk image taken from somebody's storage device raises privacy concerns, I'm preparing a custom disk image by manually creating a filesystem on a binary file. So, first of all, let's create our testing disk image.

Creating a custom disk image:

(1) Create a file of 500 MB.

    dd if=/dev/zero of=./mydisk.img bs=1M count=500

(2) Format the file with the FAT32 file system.

    sudo mkfs.vfat -F 32 mydisk.img

(3) Mount the filesystem.

    mkdir mount-point
    sudo mount -o loop mydisk.img ./mount-point/


(4) Create a text file and add some sample content.

    sudo touch mount-point/readme.txt
    sudo vim mount-point/readme.txt


    "This is just a text file for testing purposes."

(5) Now, delete the file you created.

    sudo rm mount-point/readme.txt

(6) Unmount the filesystem.

    sudo umount mount-point/

Using Autopsy tool for disk analysis:

It's time to analyze the disk image using the Autopsy tool, the GUI frontend for the Sleuth Kit.

(1) Install Autopsy tool together with Sleuthkit on a Linux machine.

    sudo apt-get update
    sudo apt-get install autopsy


(2) Start Autopsy with root privileges.

    sudo autopsy

(3) Now, we can access the web interface using the following URL.

    http://localhost:9999/autopsy

(4) Create a new case, a new host, and finally give the path to the above disk image. Once you have finished creating everything, you will see a button called "Analyze" which you can use to analyze the disk image.

(5) In this interface, click on the button for "File Analysis". Then you can see the files of the disk image. Our deleted text file appears in red.



(6) In the above screen, you can see that there is a column called "META". Click the entry for the deleted "readme.txt" file under this "META" column. Now you will see some metadata of the file.





(7) In the above screen, note that the file size is 47 bytes. Since the sector size of this disk image is 512 bytes, this file just resides in a single sector. That is the sector shown as "2038" in the above screenshot. Click on that sector number to view it.


(8) On this new screen, we can view the content of the file in different formats such as ASCII and Hex.


(9) Click on the "Export Contents" button to export the deleted file and save it somewhere in your local storage. It is saved as a raw file without the proper file extension.

(10) We can check the file type of the exported file in various ways.

    file vol1-Sector2038.raw

(11) Let's rename the file with the correct file extension.

    mv vol1-Sector2038.raw textfile.txt

(12) Finally, take a look at the file contents to confirm that it is the file we created.

    cat textfile.txt

There are so many features in Autopsy tool which we can explore.

References:


[1] https://www.sleuthkit.org/autopsy/

[2] https://digital-forensics.sans.org/blog/2009/05/11/a-step-by-step-introduction-to-using-the-autopsy-forensic-browser
 

Saturday, April 22, 2017

Taking a Linux RAM Image

For various purposes such as forensic investigations and debugging of Linux systems, we sometimes need a RAM image taken from a running Linux system. While there are various ways to do it, I explored an easy and interesting way using a special Linux kernel module called LiME. I will explain the steps one by one.

(1) Download LiME from the GitHub repository.

git clone https://github.com/504ensicslabs/lime.git

(2) Go into the src directory of the downloaded repository and compile the kernel module.

cd lime/src
make

(3) Load the kernel module and save RAM dump to a file in one line.

sudo insmod lime-4.4.0-70-generic.ko "path=/home/asanka/Desktop/asanka-ram/mem.lime format=lime"

(4) If we want to take another RAM dump, first we have to unload the kernel module.

lsmod | grep lime
sudo rmmod lime

 
(5) Now let's capture again but this time we use the 'raw' format.

sudo insmod lime-4.4.0-70-generic.ko "path=/home/asanka/Desktop/asanka-ram/mem.raw format=raw" 

(6) Analysis of the captured RAM image is a separate topic. However, we can perform some basic inspection of this RAM image as a start.

strings mem.raw | less
strings mem.raw | grep "key word"
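Since a real memory dump can't be attached here, the same strings-based triage can be illustrated on a small stand-in file: random bytes with a known text fragment planted inside (the file name and planted content are made up for the demo):

```shell
# build a small binary file standing in for a real memory dump
head -c 1024 /dev/urandom > fake-mem.raw
printf 'user password: hunter2' >> fake-mem.raw
head -c 1024 /dev/urandom >> fake-mem.raw

# list printable strings and filter by a keyword, as with a real image
strings fake-mem.raw | grep "password"
```

Against a real multi-gigabyte dump, the same pipeline surfaces URLs, command lines, and other process remnants left in memory.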


References:

[1] http://forensicswiki.org/wiki/Tools:Memory_Imaging#Linux


Inspecting File Metadata using exiftool

Inspecting the metadata of a file can reveal many interesting things about it. Especially, image files taken as pictures from cameras contain a wonderful amount of information. There's a nice tool called exiftool which we can use for this purpose. I'll briefly write about how we can install it and use it to inspect some files.

(1) In order to install exiftool, you can either use the package repositories or just download the source code of exiftool and build locally.

#To install using package repositories, issue to following command in the terminal.

sudo apt-get update
sudo apt-get install libimage-exiftool-perl 

#If you want to build exiftool from the source code, first you should download the code from the following place.


#Extract the downloaded compressed file and move into it.

tar xvzf Image-ExifTool-10.49.tar.gz
cd Image-ExifTool-10.49/

#Compile and install the tool using the following commands.

perl Makefile.PL
make test
sudo make install

(2) It's time to inspect a file. Let's take a picture from a camera and then inspect it as follows.

exiftool file-name.jpg

The screenshot shown above illustrates the output of exiftool run against an image file.

Thursday, January 26, 2017

Using PDFTK to Process PDF Documents

When dealing with PDF documents, we come across different requirements, such as splitting a document into parts or merging multiple documents together. I recently came across a great tool which we can use to do various tasks with PDF documents. I decided to list down a few things I can do using the pdftk tool and leave a link to a better resource.

(1) Install pdftk tool.

sudo apt-get update

sudo apt-get install pdftk


(2) Suppose I want to create a new pdf file extracted from the contents of pages 31 to 37 of a larger pdf file. We can do that as follows.

pdftk input.pdf cat 31-37 output output.pdf

(3) Merging two documents, one at the end of the other.

pdftk input-file1.pdf input-file2.pdf cat output new-document.pdf

(4) Selecting a few pages from multiple documents and putting them together into a single document.

pdftk A=input-file1.pdf B=input-file2.pdf cat A110-117 B2-3 output new-document.pdf

That's it!

References


Wednesday, January 25, 2017

Independent Component Analysis (ICA)

When we want to separate two signals which are mixed up, one interesting method we can use is Independent Component Analysis (ICA). I'm not knowledgeable enough to explain how the whole thing works, but there are plenty of explanations about it on the web; take a look at the references listed at the end for further details. The purpose of this article is to record the code I used recently for an ICA job so that I will not forget how to use it in the future.

In order to perform ICA in Python, we need to install an important package first.

sudo pip install --upgrade pip
pip install -U scikit-learn

Now it's time to write the Python script. The following script takes as input two wav files which contain two different mixtures of the signals. It then generates two new wav files which contain the separated signals.


"""
=====================================
Blind source separation using FastICA
=====================================

An example of estimating sources from noisy data.

:ref:`ICA` is used to estimate sources given noisy measurements.
Imagine 3 instruments playing simultaneously and 3 microphones
recording the mixed signals. ICA is used to recover the sources
ie. what is played by each instrument. Importantly, PCA fails
at recovering our `instruments` since the related signals reflect
non-Gaussian processes.

"""
print(__doc__)

import os
import wave
import pylab
import matplotlib

import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from scipy.io import wavfile

from sklearn.decomposition import FastICA, PCA

###############################################################################

matplotlib.rcParams['ps.useafm'] = True
matplotlib.rcParams['pdf.use14corefonts'] = True
matplotlib.rcParams['text.usetex'] = True

# read data from wav files
sample_rate1, samples1 = wavfile.read('100000010mix1.wav')
sample_rate2, samples2 = wavfile.read('100000010mix2.wav')

print('sample_rate1', sample_rate1)
print('sample_rate2', sample_rate2)

S = np.c_[samples1, samples2]

ica = FastICA(n_components=2)
S_ = ica.fit_transform(S)  # Reconstruct signals

print('original signal=', S)
print('recovered signal=', S_)
print('extracted signal1', S_[:,0])
print('extracted signal2', S_[:,1])

# write data to wav files
scaled1 = np.int16(S_[:,0]/np.max(np.abs(S_[:,0])) * 32767)
wavfile.write('extracted-signal-1.wav', sample_rate1, scaled1)

scaled2 = np.int16(S_[:,1]/np.max(np.abs(S_[:,1])) * 32767)
wavfile.write('extracted-signal-2.wav', sample_rate2, scaled2)

###############################################################################
# Plot results

pylab.figure(num=None, figsize=(10, 10))

pylab.subplot(411)
pylab.title('(received signal 1)')
pylab.xlabel('Time (s)')
pylab.ylabel('Sound amplitude')
pylab.plot(samples1)


pylab.subplot(412)
pylab.title('(received signal 2)')
pylab.xlabel('Time (s)')
pylab.ylabel('Sound amplitude')
pylab.plot(samples2)


pylab.subplot(413)
pylab.title('(extracted signal 1)')
pylab.xlabel('Time (s)')
pylab.ylabel('Sound amplitude')
pylab.plot(S_[:,0])

pylab.subplot(414)
pylab.title('(extracted signal 2)')
pylab.xlabel('Time (s)')
pylab.ylabel('Sound amplitude')
pylab.plot(S_[:,1])

pylab.subplots_adjust(hspace=.5)
pylab.savefig('extracted-data.pdf')
pylab.show()

Run this Python script with the two wav files in the same directory, and you will get the separated signals as wav files in addition to waveform plots of all these signals.
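For the curious, the core idea behind FastICA can also be sketched with synthetic data using only NumPy. This is a toy illustration of the fixed-point iteration, not the scikit-learn implementation used above, and all the signals here are made up for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# two independent non-Gaussian sources
t = np.linspace(0, 8, 2000)
s1 = np.sign(np.sin(3 * t))      # square wave (sub-Gaussian)
s2 = rng.laplace(size=t.size)    # Laplacian noise (super-Gaussian)
S = np.c_[s1, s2]

# mix the sources with a fixed invertible matrix
A = np.array([[1.0, 0.5], [0.4, 1.0]])
X = S @ A.T

# center and whiten the mixtures
X = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(X, rowvar=False))
X_white = X @ E @ np.diag(1.0 / np.sqrt(d)) @ E.T

# FastICA fixed-point iteration with the tanh nonlinearity
W = rng.normal(size=(2, 2))
for _ in range(200):
    WX = X_white @ W.T                  # current source estimates
    g = np.tanh(WX)
    g_prime = 1.0 - g ** 2
    W = (g.T @ X_white) / X_white.shape[0] - np.diag(g_prime.mean(axis=0)) @ W
    # symmetric decorrelation keeps the unmixing rows orthonormal
    u, _, vt = np.linalg.svd(W)
    W = u @ vt

S_est = X_white @ W.T

# each recovered component should match one source up to sign and scale
corr = np.abs(np.corrcoef(S.T, S_est.T))[:2, 2:]
print(corr.max(axis=1))  # should be close to 1 for both sources
```

ICA recovers sources only up to permutation, sign, and scale, which is why the check above uses the absolute correlation against each original source rather than a direct comparison.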



Thursday, December 22, 2016

Listening to the Giants, Once Again!

It's been a long day :)
We have been working on a long term project to minimize human-elephant conflict in Sri Lanka by applying various technologies from our expertise in computer science and embedded systems. One such application we are working on is building a smart electric fence which can notify the maintainers and people protected by the fence about breakages of the fence wire at different places. The second application is a system which can locate elephants in the wild based on the infrasonic waves (low frequency sounds) they emit. In order to do that, we have designed and built low-cost infrasonic detectors by ourselves in the lab and have performed various experiments.

Meeting in the village.
In order to evaluate the applications we have built, we have visited different places outside the lab environment and performed experiments under real-world conditions. Once I went with my lab colleagues to visit a domesticated elephant in order to record its sound, about which you can read here. As an extension to all these experiments, about a week ago, we visited Udawalawa wildlife park and Hambegamuwa village, which is situated on the edge of the wildlife park. There were two important goals of this visit. The first was to meet the villagers of Hambegamuwa and hold a meeting to get their help in building our own electric fence on their premises. The second was to visit Udawalawa park and perform elephant infrasonic localization experiments. The journey was planned to be two days long since we had a lot of work to do.

Phantom-4 flying over the electric fence.
(video footage from the drone camera is shown
at the end of this article.)

On the morning of the journey, Namal and I came to the lab and loaded all the equipment into the van. Then we went to Dr. Deepani's home to pick her up. She is an ecologist who has been working on elephant conservation in the Udawalawa area for many years, and she is the one who mediates our contacts with the villagers and wildlife department officials. After she joined us on the journey, we went directly to Dr. Kasun's home in Ambalangoda to pick him and Chathura up. We were offered breakfast at Dr. Kasun's home, since we had come from Colombo early in the morning without stopping to eat. Finally, we all got together and set sail for Udawalawa.

Rain and the darkness failed to stop our guys.
When we arrived at Hambegamuwa town, it was evening. We went directly to the small guest house which we had selected for our stay. We left our bags there and went straight to the village in Hambegamuwa for the meeting with the villagers, which was the first goal of our journey. The village is located at the edge of the Udawalawa wildlife park. Therefore, the villagers are continuously getting hit by crop-raiding elephants. The people in this small village grow crops in the surrounding area and live inside small houses under the threat of roaming elephants. To protect their village, they have an electric fence surrounding an area of approximately 20 acres. Their fence, just like all the other electric fences, breaks down frequently, mostly because of the elephants. When the fence is broken, the villagers have a hard time locating the breakage by walking along the fence wire.

Preparing infrasonic detectors for the experiment.
In this small village, we had a brief meeting with the people and agreed to donate an electric fence energizer to them in order to build a new electric fence. They will provide the labor force and other materials such as poles and wires to build the fence, while we are providing the most expensive part, the electric fence energizer. We received this energizer as a donation from the students who did Google Summer of Code (GSoC) with our SCoRe lab. As a community service, we are going to donate the energizer to the villagers. In return, we get the chance to use this fence as an experimental testbed for the electric fence breakage detection system which we developed as a result of research in the lab.

Deploying a pair of infrasonic detectors.
After the brief meeting, we went to the location of the fence around their village and tried the fence energizer we had brought for them. We noticed that the voltage of the pulses in their existing fence is significantly higher than the voltage of the pulses provided by our energizer. Their energizer is a locally built one, while ours is a branded energizer from an international manufacturer, built according to the standards of electric fencing. Even the villagers admitted that their current energizer is too dangerous to elephants, as it can kill an elephant with its higher voltage instead of driving it away. Therefore, the use of an energizer built to the standards is necessary for them. There was a light rain while we were inspecting the fence, which was a disturbance as we were dealing with high voltage. When darkness came, it was challenging to test our breakage detection system, but we kept working under torch light. When we returned to the guest house in Hambegamuwa town, it was about 9.00pm. Dinner was served to us at the house of the owner of the guest house, which was located next door, where he lived with his family. After dinner, we all went to sleep as we were tired.

Calibrating the angle of an infrasonic detector.
The next morning, we woke up to start another long day, as our plan was to test our elephant infrasonic localization system in the field. We have built low-cost infrasonic microphones and a set of firmware running on an embedded system which can be used to locate elephants from long distances. We have performed experiments with them at different places, but our objective on this journey was to use the system on real wild elephants. The owner of the guest house offered us breakfast again at his house before we started the journey. Namal was busy from the morning, as he had to prepare the microphones properly for the experiments. He even worked while we were traveling in the van from the guest house to the Udawalawa wildlife park entrance area.

Giants have been here...
After arriving at the Udawalawa wildlife park area, we parked the van near a temple and set up our microphones in the vehicle park as a start. We stayed there recording data for a few hours while Dr. Kasun and Dr. Deepani went to have a meeting with the Udawalawa wildlife park warden. After the meeting, he returned with the good news that we could carry our quad-copter to their place and fly it around to take some pictures. Therefore, Chathura went with them in the van, since he is our quad-copter specialist and pilot (read more about our drone adventures here). Meanwhile, we decided that recording infrasonic data from the vehicle park was not a very successful approach, as it picks up a lot of noise from the vehicles on the nearby road. Therefore, we decided to carry our two pairs of infrasonic detectors into the jungle.

Our hiding place where we spent the whole evening.
In order to get closer to the Udawalawa reservoir, we had to crawl under an electric fence, which was difficult with the equipment we were carrying. After getting closer to the reservoir, we found a nice hiding place where we could perform the sound recordings. We placed our infrasonic detectors on the ground and calibrated their settings to capture data continuously. Then we stayed there with a pair of binoculars to note the sightings of elephants and their locations based on visual observation. Our hope was to compare the results from the data captured by the infrasonic detectors against our visual observations. An old fellow who lived in the temple where we parked our van came with us to this place, helped us in various ways for a while, and finally went back to the temple. From that point onwards, only Namal and I were left near the reservoir.

Elephants in the distance as seen from the binoculars.
When it was lunch time, Chathura brought two lunch packets and a water bottle to us and went back in the van to the place where they were flying the quad-copter. Namal and I spent the whole evening at this location near the reservoir and had lunch at about 3.30pm while watching elephant herds coming and going to the Udawalawa reservoir. When we finished our lunch, there were no elephants near the reservoir, so we both decided to walk to the water and come back immediately if we noticed any danger. By the way, while the microphones were recording data, we had nothing else to do there other than watching elephants through the binoculars. It was about 5.30pm when our guys returned to the location where we were after flying the drone. Since it was late, and many people had warned us that elephants can arrive at the place we were hiding after darkness falls, we decided to grab our stuff and move out of the jungle. We crawled again under the electric fence, loaded our equipment into the van parked near the temple, and started moving. It was close to 6.00pm, and the time was right for the Udawalawa elephant orphanage to feed the baby elephants with milk. So, we quickly moved into that place to see it. Finally, we started our journey back to Colombo.




It was about 1.30am when I returned home to Colombo, and I was so tired. However, the hope that our recorded data must contain some proof that our elephant localization system works in the real field kept spinning in my mind.

~********~