Friday, December 14, 2012

Memories of ICTer 2012 Conference

Title slide of my presentation
Yesterday I attended the ICTer 2012 conference, since one of the papers I co-authored is in the conference proceedings. Our paper was titled "TikiriPower – Using TikiriDB abstraction on Smart Home systems". This research was done by the SCoRe research group some time back. The first author of the paper was Mr. Lakmal Weerawarne, who worked with us for a considerable amount of time. However, since he has now left to pursue his Ph.D., I was the person next in line to conduct the presentation at ICTer 2012.

Our paper was scheduled for the Systems and Performance Evaluation session. Prof. Sudhir Dixit from HP Labs India was the invited speaker of the session. Even though I was initially a little nervous, since this was the first time I was presenting a paper at an event of this scale, after a few minutes I gained the confidence to move through the latter part of the presentation. In the Q & A session after my presentation, Prof. Athula Ginige raised some questions, but luckily I managed to provide him with explanations.

In addition to presenting a paper, there were more benefits to attending the conference. Besides listening to other presentations, I tried my best to talk to as many other attendees as I could. Among them, three people were especially friendly to me. Dr. Gordon Hunter had visited with his Ph.D. student Mr. Dilaksha Attanayake from Kingston University, London. Both of them were very nice people. Along with a few more co-authors, they had published a paper titled "A Novel Web-Based Tool to Enhance Learning of Mathematical Concepts" at ICTer 2012, so they were here to present it. Another interesting person I met was Dr. George Weir from the University of Strathclyde, Glasgow. He didn't present a paper, but he delivered a keynote speech and conducted an interesting workshop.

Altogether, I'm sure the people I met and the experiences and knowledge I gained will leave a footprint in my memory for a long time.

UPDATE:

Mr. Dilaksha from Kingston University took some pictures at the conference during my presentation and emailed them to me later.

Tuesday, December 11, 2012

Disable kernel messages and getty login through the UART port on Raspberry Pi

When we want to use the UART Rx and Tx pins of a Raspberry Pi for some purpose, such as communicating with an external device, we face a little problem. The Linux operating system sends its kernel messages at boot time to the serial port. Additionally, it allows the user to log in to the system through a serial console provided over the UART pins. Therefore, when we use the UART pins for some other purpose, we may receive data belonging to the operating system.

After searching the web, the solution I found is to disable these Linux kernel messages going to the UART port, and also the serial console login facility. Here's how I did it.

To disable kernel messages coming to the UART port, open the following file by issuing the command,

sudo nano /boot/cmdline.txt

Edit the content of the file by removing all the parameters which involve the ttyAMA0 device. The resulting content should look like the following.

dwc_otg.lpm_enable=0 rpitestmode=1 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait

Now save and exit from the nano editor.

To disable the serial console through the UART port, open the following file by issuing the command,

sudo nano /etc/inittab

Find the following line in that file.

T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100

Now add a '#' character in front of that line to comment it out, so that it looks as follows.

# T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100

Now save and exit from the nano editor. After editing these two files we are done. The next time you reboot the Raspberry Pi, you will be able to use the UART port without any disturbance from the system.


Thursday, December 6, 2012

Boot-Repair did the trick !!! :)


On my laptop I had Ubuntu 10.10 for a long time. Recently I wanted to use the Ubuntu 12.04 LTS version, and therefore I installed it in a separate partition on my machine. So, when I started the machine, GRUB showed up with the options to boot either Ubuntu 10.10 or Ubuntu 12.04.

However, after a few days I lost my interest in the new interface of Ubuntu 12.04. Therefore I moved back to my previously used Ubuntu 10.10. Since I didn't need the Ubuntu 12.04 installation any further, I formatted the partition where Ubuntu 12.04 was installed. I did that while working in Ubuntu 10.10. However, the next time I started my computer I got a prompt showing the text 'grub rescue>'.

As I understand it, formatting the Ubuntu 12.04 partition damaged GRUB. Because of this I couldn't even access my Ubuntu 10.10. At first I thought I would have to do a lot of configuration to fix this problem. However, after searching the web for a while, I finally found the solution.


There's a free tool called Boot-Repair which can fix common boot issues. This page (https://help.ubuntu.com/community/Boot-Repair) provides all the necessary information to use this tool, which takes very few steps. According to the instructions given, I created a live USB stick with an Ubuntu image and then booted the laptop with it. When the live Ubuntu USB booted, I chose the 'Try Ubuntu' option without installing it. After the live Ubuntu desktop loaded, I installed the Boot-Repair tool on this live version. Then I ran Boot-Repair and had to just click a single button. When I restarted the machine without the live USB stick, GRUB showed up, giving me the option to go to my ever-loving Ubuntu 10.10.

That's it. Thanks to the Boot-Repair tool, I could solve the issue.

Monday, November 12, 2012

Started to use Google App Engine !

Today I did some reading about Google App Engine and started learning to develop applications for it. Three languages are supported on Google App Engine: Python, Java and Go. Since I'm a Python lover, I moved directly to learning what I can do with Python on Google App Engine.

There are a lot of resources available on the web, so it isn't necessary to repeat them here. Instead, I'm directly writing down the things I learned today by exploring Google App Engine. First of all, I downloaded the Google App Engine SDK from here. Since I'm using Linux, I downloaded the version for the Linux platform, which comes as a zipped archive.

After uncompressing the zipped folder, we can see the different Python-based tools available in this source code. So, now I'm writing a simple program which prints some text in the web browser.

1. Create a new directory somewhere in the file system. Say we created it on the desktop, naming it my-application.

mkdir my-application

2. Now create a file named app.yaml inside that directory and add the following content to it. Please note that the application name is given as my-application. You have to give a unique name for your application.

application: my-application
version: 1
runtime: python27
api_version: 1
threadsafe: yes

handlers:
- url: .*
  script: helloworld.app

libraries:
- name: webapp2
  version: "2.5.1"

3. Now create another file named helloworld.py, which will contain the code of our application. Add the following content to that file.

#!/usr/bin/env python
import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        self.response.out.write('<html><body><h4>My First Google App Engine Application</h4>')
        self.response.out.write('</body></html>')

app = webapp2.WSGIApplication([
    ('/', MainPage)
], debug=True)

4. Now our source code is ready and we have to test the app. The Google App Engine SDK comes with a development server which can be used to test our application before actually deploying it on the Google cloud. To use this server, go into the uncompressed Google App Engine SDK folder. There's a tool named dev_appserver.py. Run this Python script, giving the path to our new application as follows.

python dev_appserver.py /path/to/my-application

Then this server will start. Now open a web browser and go to the following URL.

http://localhost:8080/

The text "My First Google App Engine Application" will be printed in the browser. That means our application is ready.
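As an aside, the app object created by webapp2 is just a standard WSGI application, so it can be exercised without any server at all. Here's a self-contained sketch using only the standard library; the handler below is a plain stand-in for the webapp2 one, since webapp2 itself needs the SDK installed.

```python
from wsgiref.util import setup_testing_defaults

# A plain WSGI stand-in that produces the same HTML as the webapp2 handler.
def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [b'<html><body><h4>My First Google App Engine Application</h4>'
            b'</body></html>']

# Build a fake request environment and capture the response status.
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured['status'] = status

body = b''.join(app(environ, start_response))
print(captured['status'])   # 200 OK
print(body.decode())
```

This is the same calling convention dev_appserver.py uses behind the scenes: it hands the app an environment dictionary and a start_response callable, then streams back whatever the app returns.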

5. Now we can try deploying our simple app on the Google cloud. To do that, go to this URL (https://appengine.google.com/) and sign in. Then click on the "Create Application" button. As the application identifier, you have to give a unique name for the application. This name should be the one defined as the application name in the app.yaml file. For example, we might use my-application as the name, if nobody has used that name for an application on Google App Engine yet.

Then give some application title and click on the "Create Application" button. Now Google is ready to host our application.

6. To upload our application code to the Google cloud, we can use the appcfg.py tool, which also comes with the Google App Engine SDK. We run it giving the path to our application as follows.

python appcfg.py update /path/to/my-application

After this tool completes, your application has gone to the Google cloud. You can see it under the appspot.com domain name with the application name in front of it. For example, if the application name is my-application, then you can access your app by typing the following URL in the web browser's address bar.

http://my-application.appspot.com/
  
If everything is fine, you will see the text "My First Google App Engine Application" in the web browser again.

Friday, October 26, 2012

Unembedded Font Error in IEEE PDF eXpress Plus

Recently, when I was going to submit the camera-ready version of a paper to a conference, I had to do it via IEEE PDF eXpress Plus. This website provides a verification facility for our papers, to check whether they meet the formatting requirements defined in the IEEE paper templates. What I had to do was submit a PDF version of my paper through my account and check whether my paper was OK according to the IEEE standards.

I was using LaTeX to prepare my paper, with the help of the IEEE conference paper templates. However, when I initially submitted the paper to the site, it was rejected with an error report. The error report mentioned that some fonts in my paper were not embedded: Times-Italic, Times-Roman, Times-BoldItalic, Times-Bold, Helvetica and Courier. I searched the web as usual to find a solution. According to some links I found on the web, there are some commands to be issued when generating the PDF from the LaTeX source files, in addition to the usual commands I use.

So, I issued the commands as described on those websites, and the resulting PDF passed IEEE PDF eXpress Plus without any error. Since there is a high probability of me facing this error again in the future, I thought to make it available here. The following are the commands I issued to generate my PDF from the LaTeX source files.

    latex research_paper.tex

    bibtex research_paper.aux

    latex research_paper.tex

    latex research_paper.tex

    dvips -Ppdf -G0 -tletter research_paper.dvi

    ps2pdf -dCompatibilityLevel=1.4 -dEmbedAllFonts=true -dPDFSETTINGS=/prepress research_paper.ps research_paper.pdf


Web Sources:

1. http://mohamednabeel.blogspot.com/2009/10/fixing-font-not-embedded-issue-to-pass.html

2. http://www.latex-community.org/forum/viewtopic.php?f=5&t=11296


Friday, October 19, 2012

Serial Line Internet Protocol (SLIP) implementation in Python

Some time back I had a requirement to communicate with an external device connected to a USB port of my computer. The external device was an MSB430 sensor mote running Contiki. My application running on the mote communicates with the computer using the SLIP protocol. Therefore I needed to make my program on the PC communicate with the sensor mote using the SLIP protocol over the USB port.


Serial Line Internet Protocol (SLIP) is a very simple protocol which can be used to communicate between different devices over serial lines. It just encodes the user's data byte stream before writing to the serial line and decodes it after reading from the serial line. I found a C language implementation of the SLIP protocol, but I wanted a Python implementation. So the first thing I did was search the Internet for a Python-based implementation of SLIP, but I couldn't find anything. Finally, what I had to do was implement it on my own. Since there may be more people who need such a Python-based implementation of the SLIP protocol, I thought to put my code on the Internet.

My code consists of two source files. The SLIP protocol encoding and decoding functions are defined in the ProtoSLIP.py file. Another file, named SerialComm.py, wraps those functions and provides some high-level functions which a user program can use to open a serial port, write data to it and read data from it using the SLIP protocol. So, here we go.

Content of the ProtoSLIP.py file

import serial
from collections import deque

# SLIP special bytes, declared in octal (see RFC 1055)
SLIP_END     = 0300
SLIP_ESC     = 0333
SLIP_ESC_END = 0334
SLIP_ESC_ESC = 0335
DEBUG_MAKER  = 0015
MAX_MTU = 200

readBufferQueue = deque([])

#-------------------------------------------------------------------------------
# This function takes a byte list, encodes it with the SLIP protocol and
# returns the encoded byte list.
def encodeToSLIP(byteList):
    tempSLIPBuffer = []
    tempSLIPBuffer.append(SLIP_END)
    for i in byteList:
        if i == SLIP_END:
            tempSLIPBuffer.append(SLIP_ESC)
            tempSLIPBuffer.append(SLIP_ESC_END)
        elif i == SLIP_ESC:
            tempSLIPBuffer.append(SLIP_ESC)
            tempSLIPBuffer.append(SLIP_ESC_ESC)
        else:
            tempSLIPBuffer.append(i)
    tempSLIPBuffer.append(SLIP_END)
    return tempSLIPBuffer

#-------------------------------------------------------------------------------
# This function uses getSerialByte() to read SLIP-encoded bytes from the
# serial port and returns a decoded byte list, or None on error.
def decodeFromSLIP(serialFD):
    dataBuffer = []
    while True:
        serialByte = getSerialByte(serialFD)
        if serialByte is None:
            return None
        elif serialByte == SLIP_END:
            if len(dataBuffer) > 0:
                return dataBuffer
        elif serialByte == SLIP_ESC:
            serialByte = getSerialByte(serialFD)
            if serialByte is None:
                return None
            elif serialByte == SLIP_ESC_END:
                dataBuffer.append(SLIP_END)
            elif serialByte == SLIP_ESC_ESC:
                dataBuffer.append(SLIP_ESC)
            elif serialByte == DEBUG_MAKER:
                dataBuffer.append(DEBUG_MAKER)
            else:
                print("Protocol Error")
        else:
            dataBuffer.append(serialByte)

#-------------------------------------------------------------------------------
# This function reads byte chunks from the serial port and returns one byte
# at a time, or None if no data could be read.
def getSerialByte(serialFD):
    if len(readBufferQueue) == 0:
        # fetch a new data chunk from the serial port
        while len(readBufferQueue) < MAX_MTU:
            newByte = serialFD.read()
            if len(newByte) == 0:  # read timed out, stop fetching
                break
            readBufferQueue.append(ord(newByte))
    if len(readBufferQueue) == 0:
        return None
    return readBufferQueue.popleft()

Content of the SerialComm.py file

import ProtoSLIP
import serial

#-------------------------------------------------------------------------------
# This function connects to and configures the serial port, then returns the
# file descriptor, or None if the port could not be opened.
def connectToSerialPort():
    try:
        serialFD = serial.Serial(port='/dev/ttyUSB0',  # port to open
                                 baudrate=115200,      # baud rate to communicate with the port
                                 bytesize=8,           # size of a byte
                                 parity='N',           # no parity
                                 stopbits=1,           # 1 stop bit
                                 xonxoff=False,        # no software handshake
                                 rtscts=False)         # no hardware handshake
    except serial.SerialException:
        print("Couldn't open serial port")
        return None
    print("Opened serial port")
    return serialFD

#-------------------------------------------------------------------------------
# This function accepts a byte list and writes it to the serial port.
def writeToSerialPort(serialFD, byteArray):
    encodedSLIPBytes = ProtoSLIP.encodeToSLIP(byteArray)
    byteString = ''.join(chr(b) for b in encodedSLIPBytes)  # convert byte list to a string
    serialFD.write(byteString)
    return

#-------------------------------------------------------------------------------
# This function reads from the serial port and returns a byte list, or None
# on error.
def readFromSerialPort(serialFD):
    byteArray = ProtoSLIP.decodeFromSLIP(serialFD)
    if byteArray is None:
        print("readFromSerialPort(serialFD): Error")
        return None
    return byteArray

#-------------------------------------------------------------------------------
# This function closes the serial port.
def disconnectFromSerialPort(serialFD):
    serialFD.close()
    return

SerialComm.py should be imported by a user program, which can then call its functions appropriately. I hope the comments I have put in the code will make the functionality of the program clear enough. Some information, like the exact serial port we are opening, the baud rate, parity, etc., has to be edited in the code according to your requirements.
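As a quick, self-contained sanity check of the escaping rules used above, a round trip through encode and decode should return the original payload. This sketch is written separately from the files above (and uses hex byte values rather than octal), so it can be run on its own:

```python
# SLIP special bytes (same values as above, written in hex)
SLIP_END, SLIP_ESC, SLIP_ESC_END, SLIP_ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(data):
    """Frame a list of byte values: escape END and ESC, wrap with END."""
    out = [SLIP_END]
    for b in data:
        if b == SLIP_END:
            out += [SLIP_ESC, SLIP_ESC_END]
        elif b == SLIP_ESC:
            out += [SLIP_ESC, SLIP_ESC_ESC]
        else:
            out.append(b)
    out.append(SLIP_END)
    return out

def slip_decode(frame):
    """Reverse the encoding; assumes one complete, well-formed frame."""
    out, i = [], 1                      # skip the leading END byte
    while frame[i] != SLIP_END:
        if frame[i] == SLIP_ESC:
            i += 1
            out.append(SLIP_END if frame[i] == SLIP_ESC_END else SLIP_ESC)
        else:
            out.append(frame[i])
        i += 1
    return out

# A payload containing both special bytes must survive the round trip.
payload = [0x01, SLIP_END, 0x02, SLIP_ESC, 0x03]
assert slip_decode(slip_encode(payload)) == payload
```

Each special byte in the payload costs one extra escape byte on the wire, plus the two END bytes framing the whole packet.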

I hope my code will help someone. Cheers!



Multitasking in life: A good idea?

From the beginning of my research life, I have been working on multiple tasks at the same time. By that I mean I have had to do several tasks interchangeably in a day, without completely focusing on a specific one for a long time. Sometimes, however, I got to work on a single task, since there was no other task to be done. The stressful workload I'm handling these days made me review my way of working and reorganise it if necessary.

I have been thoroughly reviewing research papers and different documents for research purposes. Now I have an important requirement to review my life and my working pattern in a similar way, to find any defects in them. The experiences of the last few weeks showed me some important issues with multitasking. I have so many important pieces of work to be done, and unfortunately almost all of them seem to need high priority. Moreover, each of those tasks takes a significant amount of time and effort. I have done my best up to now, and will do my best in the future, to get all those works done simultaneously, but I have a bad feeling that doing things this way does not result in good quality work.

Compared to the days when I did one important task a day, doing things interchangeably seems a very bad idea. Unlike computers, my mind is not very good at multitasking. When I switch between multiple tasks, it seems I'm not making good progress on any of them. I have a feeling that if I did all these works in a sequential manner, I could complete them sooner than by doing them in parallel, and with much better quality too. This is because when I'm involved in a single piece of work for a longer time, I get a good amount of time to think about it. Fresh ideas and innovation fill my mind, making the work really successful. However, when doing things in parallel, before my mind settles down on one piece of work I have to switch tasks. Therefore it's hard to keep the focus on what I'm doing right now, resulting in lower quality work.

OK, having understood that single-tasking is better for me than multitasking, why do I still keep doing multiple tasks interchangeably every day? This is the most important question. I don't have complete control of my life. There are things I control, and there are a lot more things which are out of my reach. Sometimes it seems I'm not very good at identifying the things I do control. For example, I usually hesitate to say 'No' to people, and because of that I get trapped into work which I really don't have to do. However, there is some work which is actually out of my control, and therefore I have to do it somehow. For example, the main research project works in our lab, my final year research project and other academic works like assignments are out of my control. Therefore those works come into my 'To Do' list with higher priorities, and I have to find time slots for all of them in my busy schedule.

This is a really problematic situation. The last few days I was very stressed. Especially yesterday evening, I could not figure out how I'm going to make any progress in my life this way. Therefore, when I went back to my boarding place yesterday, I went directly to sleep without doing anything else. This morning I thought I should write down my situation, because it helps to bring the thoughts out of my mind and into some different form. So, that's what I did right now. From now on, I will find better ways to organise my work in particular, and my life in general.

Monday, September 24, 2012

Running Linux on a Raspberry Pi

Raspberry Pi is a modern single-board computer which can run Linux. There is no need to explain the advantages of Linux compared to any other operating system that can run on low-resource embedded platforms. Unlike other embedded platforms where Contiki or TinyOS can run, this new platform, a Raspberry Pi with Linux, provides almost all the capability we have on a standard computer.

Recently we received a Raspberry Pi in our lab for a project, and I got the opportunity to work with it. It is a product of the Raspberry Pi Foundation in the UK. There are different customised distributions of Linux available for the Raspberry Pi. We boot the device from an SD card. It has ports to connect different peripherals such as audio and video devices, keyboard and mouse. However, the most prominent way to use this device is to connect it to a LAN via the Ethernet port and then log in to it remotely from an SSH terminal.

In addition to the standard ports available on the board, the Raspberry Pi provides some GPIO pins which can be used to connect various other external devices to it. By searching the Internet, I found many interesting applications and projects done using Raspberry Pi devices. It seems Raspberry Pi is going to dominate the embedded systems world.


Friday, September 21, 2012

Configuring a DHCP server on Ubuntu 11.04

In our LAN we assign static IP addresses to our machines to access the network. However, yesterday we received a new device in the lab which is preconfigured for DHCP. I wanted to connect it to our LAN for testing without changing its configuration. Actually, to change this device's configuration I have to log in to it via SSH, which means it has to be connected to the network first. Therefore the only way I had was to temporarily set up a DHCP server in our lab, so that our new device could acquire an IP address from it.

I had to search the web to find out how to do it, since I hadn't done such a thing before. To avoid forgetting what I did, I'm writing it down here. The machine I used to set up the DHCP server is running Ubuntu 11.04. So, here are the steps I followed.

1. Open a terminal and issue the following commands to install the DHCP server.

      sudo apt-get update
      sudo apt-get install dhcp3-server

At the end of the installation it will show an error message saying it couldn't start the DHCP server. This is OK, since we haven't configured the server yet. After configuring it, we can start it manually.

2. Now issue the following command to open the configuration file of the DHCP server.

      sudo nano /etc/dhcp/dhcpd.conf

3. It's time to add our configuration details for the DHCP server. Here's the information I have about my requirements. The IP address of the DNS server is 192.248.16.91. The gateway IP address is 10.16.79.254. Our network address is 10.16.68.0. The netmask is 255.255.255.0. The broadcast address is 10.16.79.255.

I need my DHCP server to assign IP addresses to requesters in the range from 10.16.68.60 to 10.16.68.65. So, remove the current content of the opened file and add the following content. I have included my configuration, so anyone else has to put in their own correct information.

 ddns-update-style none;  
 option domain-name-servers 192.248.16.91;  
 default-lease-time 86400;  
 max-lease-time 604800;  
 authoritative;  
 subnet 10.16.68.0 netmask 255.255.255.0 {  
     range 10.16.68.60 10.16.68.65;  
     option subnet-mask 255.255.255.0;  
     option broadcast-address 10.16.79.255;  
     option routers 10.16.79.254;  
 }  

After adding the content, save and exit from the nano editor.

4. It's time to start the server. You can start it by issuing the following command.

      sudo /etc/init.d/isc-dhcp-server start

5. Now we can check whether the DHCP server works. For that, I connected another computer to the LAN and put it in DHCP mode, so this second computer should acquire an IP address from my DHCP server. By issuing an ifconfig command on this second machine, I verified that it had acquired the IP address 10.16.68.62, which is within the range I specified in the DHCP server. So it works.
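As a small aside, the subnet and range arithmetic can be double-checked with Python's ipaddress module. The addresses below are the ones from my configuration; this is just an illustrative sketch:

```python
import ipaddress

subnet = ipaddress.ip_network("10.16.68.0/24")    # subnet declared in dhcpd.conf
pool_start = ipaddress.ip_address("10.16.68.60")  # start of the DHCP range
pool_end = ipaddress.ip_address("10.16.68.65")    # end of the DHCP range
acquired = ipaddress.ip_address("10.16.68.62")    # address the client received

assert acquired in subnet                    # the lease belongs to the subnet
assert pool_start <= acquired <= pool_end    # and falls inside the configured range
print("10.16.68.62 is inside the configured pool")
```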

We can see what is going on from the DHCP server running machine by issuing the following command.

      sudo tail /var/lib/dhcp/dhcpd.leases

It showed the details of the second machine which acquired an IP address from the DHCP server. Additionally, the following command can be used to see the activities of the DHCP server.

      tail -n 100 /var/log/syslog

So, now our DHCP server works fine. The actual reason for setting up a DHCP server in our lab was that we recently received a Raspberry Pi single-board computer, and the operating system I used to boot it is preconfigured for DHCP. That's why I needed a DHCP server to test our Raspberry Pi.




Thursday, September 6, 2012

Simulating Wireless Networks With GloMoSim

During my research seminar days I came across two important research papers which introduced me to a wireless network simulator I hadn't used before. The first paper was "Hierarchical Geographic Multicast Routing for Wireless Sensor Networks", presented by Ravinda, while the other one was "Design and analysis of a leader election algorithm for mobile ad hoc networks", presented by Chathuranga. In both of these papers, the evaluations of the solutions are performed using a simulator called GloMoSim. So, I wanted to try this tool.

GloMoSim is an event-driven, packet-level simulator. It is written in a language called PARSEC, which was chosen because it is specially designed for implementing simulators. GloMoSim comes with various implementations of routing protocols, MAC protocols, etc. Therefore it is pretty easy to set up a network and simulate different scenarios. In addition to those default features, GloMoSim can easily be extended by adding our own protocols and applications for research purposes.

In this article I will write down the steps I followed to run a simple simulation, starting from downloading the GloMoSim source code. This article, which I found while searching the web, helped me a lot. I installed and worked with GloMoSim on an Ubuntu 10.10 machine. So, here we go.

1) Go to the GloMoSim download page and download version 2.03, which was the latest one available at the time of writing. Then extract the compressed directory to get a directory named glomosim-2.03. Let's say you have extracted it to your desktop.

2) Now open a terminal and go into this uncompressed directory. Inside it there's a separate directory named parsec, which contains the PARSEC compiler used to compile GloMoSim. For Ubuntu, we use the redhat-7.2 version of it. So go to the directory located at "~/Desktop/glomosim-2.03/parsec/redhat-7.2/bin".

  cd ~/Desktop/glomosim-2.03/parsec/redhat-7.2/bin

This directory contains two files, parsecc and pcc. Copy them to the /usr/bin directory.

  sudo cp parsecc /usr/bin/
  sudo cp pcc /usr/bin/

3) Now we need to set an environment variable with the PARSEC directory path. So, open the .bashrc file by issuing the following command.

  gedit ~/.bashrc

Add the following content to the end of this file.

  PCC_DIRECTORY=~/Desktop/glomosim-2.03/parsec/redhat-7.2
  export PCC_DIRECTORY

Now open a new terminal for the change to take effect. To avoid even that, you may run those two lines directly in the current terminal instead of adding them to the .bashrc file.

4) It's time to compile GloMoSim. So, do the following.

  cd ~/Desktop/glomosim-2.03/glomosim/main
  make

If everything is done properly, you will see the compilation proceed. You have to wait until it completes. Then our initial work is over.

5) Now we can run a sample simulation on GloMoSim. For that go to bin directory as follows.

  cd ~/Desktop/glomosim-2.03/glomosim/bin

There's a file named config.in which contains the configuration of a simulation. Another file, called app.conf, contains the configuration of each node in the simulated network. You can find these two files in the current directory. GloMoSim uses these configurations to simulate a network. Now let's run the default simulation configured in these files by issuing the following command.

  ./glomosim config.in

You will see the simulation's output printed on the terminal while it runs. (A screenshot of the terminal output appeared here.)
For our evaluation purposes, the statistics of the simulation are written to a file named glomo.stat, which we can open to view them.

  cat glomo.stat

The file contains the statistics collected for each node and protocol layer during the simulation. (A screenshot of the file contents appeared here.)
More details about the simulator can be found by reading through the GloMoSim website. Additionally, the two configuration files I mentioned contain a lot of comments and are pretty much self-documenting. Therefore it's worth reading through all these things.
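Since glomo.stat is a plain text file, the statistics for a single node can be pulled out with a few lines of Python. Note that the sample lines below are only illustrative, not copied from a real run; the exact field layout depends on which protocols are enabled in config.in, but in my runs each statistic line began with a Node: field:

```python
# Hypothetical sample of glomo.stat-style lines; the real layout depends on
# the protocols enabled in config.in.
sample = """\
Node:      1, Layer:    AppCbrServer, Total number of bytes received: 5120
Node:      2, Layer:    AppCbrClient, Total number of bytes sent: 5120
Node:      1, Layer:    RadioNoCapture, Number of packets received: 98
"""

def stats_for_node(text, node_id):
    """Return the statistic lines belonging to a single node."""
    result = []
    for line in text.splitlines():
        if line.startswith("Node:"):
            # the node number is the value of the first comma-separated field
            num = line.split(",")[0].split(":")[1].strip()
            if num == str(node_id):
                result.append(line)
    return result

for line in stats_for_node(sample, 1):
    print(line)
```

To run it against a real simulation, read the file with open("glomo.stat").read() in place of the sample string.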

I'm hoping to write another article about adding new application layer functionality to simulated nodes in GloMoSim. Until then, that's all I have for now.

Friday, August 10, 2012

Myna's Struggle, Two Girls and Computer Science


This morning, just as I got close to the main entrance of the UCSC building complex, I heard a huge noise. There were three birds on the ground in front of the auditorium: two crows and a myna. The crows were pulling the myna from two sides and the myna was trying its best to escape. I didn't think twice, even though I could have been interrupting a natural event that happens in the environment every day. I didn't care. I just wanted to save the myna.

When I ran towards them, the two crows flew away while the myna stayed on the ground. One of its legs was trapped in some kind of nylon string, which is why it was in trouble; I think the crows were trying to take advantage of that. Anyway, I tried my best to remove the string but it was not that easy. The bird also seemed so weak that I thought it was going to die. At that point a good idea came to my mind. The Department of Zoology in the University of Colombo has a lot of bird lovers. I have seen some of the events they organise every year about birds and other kinds of animals. I thought they could do something. So I took the bird and walked towards the Department of Zoology.

Near the department, two girls were having a chat, and I talked to them. When they saw the bird they responded quickly, taking it from my hands. They said they could free it from the nylon string and take care of it. Then they ran into their department building. So, there was nothing left for me to do about the bird.

While I was walking back to our UCSC building, some nice thoughts came into my mind. When I was doing Mathematics back in my A/Ls, I felt like Mathematics was the greatest thing in this world and every other subject was nothing in front of it. When I started to do Computer Science at university, I felt like CS was the most powerful collection of knowledge within the human knowledge boundary. As I approached my final year, it sometimes felt like we knew a huge part of the human knowledge base. Today, thanks to the myna, the two crows and the two girls, I learnt a lesson about the importance of experts in different fields. If we didn't have people with different interests and passions for different subjects, this world would be so different from what it is today. Thanks to this diversity of passions, humans are still ahead of any living being we know of in this universe.

Friday, August 3, 2012

An Unforgettable Collaboration

About a year ago I received the opportunity to work with a wonderful person on a wonderful research journey which is now about to end. Lakmal Weerawarne (to me, Lakmal Aiya) worked as a research assistant at the WASN lab, playing a leading role in most of the research work of our Sustainable Computing Research (SCoRe) group. He is now leaving us to pursue his Ph.D at the University of Binghamton. I worked closely with him on most of the projects throughout this period, producing a lot of unforgettable memories which will last a lifetime. All those moments of challenges, difficulties and happiness move around in my head as if everything happened today. This is my personal memoir of that wonderful collection of life experiences I earned working with him.

    During my third year first semester at university, everybody in my batch was busy finding a place to do their internship training in the second semester. I didn't have any rush or uncertainty in my mind since I had already decided how and where I would spend my internship period: the Wireless Ad-hoc Sensor Networks (WASN) laboratory at UCSC. I had been voluntarily contributing to some of the research work at the WASN lab since my second year, so it wasn't a new place to me. I knew that if I wanted a semester full of research, WASN was where I should be. In this way I became a member of the WASN lab when my internship started in March 2011.



    From the beginning, Dr. Kasun, my supervisor, assigned me to different projects where I played a supporting role to the research assistants who were mainly working on them. He had a plan to start a new project deploying the database abstraction of sensor networks on a smart home application. This work went beyond the conventional sensor network hardware platforms we were working with in those days, since we needed a lot of specialized hardware and related low-level software components to make this new project a success. Dr. Kasun said that a new research assistant from the Physics department of the University of Colombo would join the lab to work on this project. I was supposed to work with him after this recruitment.



After about a month, Lakmal joined us. Even though he didn't come from a computer science background, he seemed a quick learner. He easily understood everything we were working on in our sensor network projects and immediately jumped into the subject matter. He was really passionate about embedded and ubiquitous systems, which made him the main contributor to this smart home project. We became good friends from the beginning. The early days of the project went smoothly without any issues and we were generally easygoing. Later on, unbelievable challenges and tough times occurred in our work, and when I look back on those days, I feel we would not have got through them if both of us hadn't had the required amount of patience and trust in each other's capabilities.

    A typical day started with some particular part of our problem, which we worked on carefully. Lakmal set up hardware components and wrote low-level drivers for them while I worked on the high-level database abstraction layer. When we integrated all these things and programmed our applications onto the MCUs of the sensor nodes, unexpected results came out. Sometimes it took several hours, or even worse, several days to figure out what went wrong in which component. We missed our lunch on so many days since neither of us wanted to stand up from our seats and go for a meal, leaving our mind-blowing problem alone in the lab. When we felt too tired, we went to the canteen at about 4 or 5pm and ate something while drinking tea. However, our typical day didn't end there. We usually left the university at about 8.00pm after struggling with our project work. If the university hadn't closed at 8.00pm, I'm pretty sure we would have stayed even longer working on our project. There's a nice array of words which explains our situation in those days briefly: "When the going gets tough, the tough get going." That's exactly what happened. The tougher the problems got, the more encouraged we were to fix them.



    A different kind of task was done during this time period, adding another set of experiences. A school in the Ampara district had received a grant to buy a few computers. The principal of that school had heard about our Linux-based PokuruPC project, where a single computer system unit can be used by 4 people with 4 monitors, 4 mice and other peripheral devices. Even though they only had money to buy about 5 ordinary computers, by using PokuruPCs they could have 12. Dr. Kasun asked our research group to visit that school in Ampara to set up their computer lab. So our group, including Lakmal and me, went there and stayed in Ampara for about 2 days, and we ran a little workshop for the school kids to make them familiar with computers and Linux. I wrote an article about this journey at the time, which you can find here.

After my internship period I started my 4th year at university and therefore I couldn't be at the WASN lab all the time. However, our collaboration on the project work still continued. Another nice thing was the WASN course in the 4th year first semester. While Dr. Kasun handled the lectures of that course, Lakmal was the person who carried out all the practical work. When it comes to course work, I think he is a bit of a tough guy. All the reports and other materials had to be properly prepared, and after each weekly practical he asked the students questions which were sometimes not that easy to get through. We couldn't do much serious work together after I started my 4th year since I had my own work to do. But after some new interns from my junior batch started working at the WASN lab, we all made some more improvements to the smart home project, making it much more usable.

In my short lifetime of research work at university, I have worked with different people on different projects, and I can honestly state that Lakmal is the best research colleague I have ever had. I learned a lot from him and he improved my enthusiasm for research and academic life to a great extent. So, even though I'm losing the greatest research colleague I ever had, I wish him success in every step he takes towards his Ph.D and in the rest of his life, and moreover I hope that one day we will again get a chance to work on a tough research problem and publish together.
"When the going gets tough, the tough get going"


Monday, July 23, 2012

Talk On Enix Operating System

Today in our research seminar, I did a presentation on a research paper. The paper I selected was "Enix: A Lightweight Dynamic Operating System for Tightly Constrained Wireless Sensor Platforms" [link], which was published in 2010 at the ACM SenSys conference. Since Operating Systems is one of my favourite courses at university, I wanted to present an OS-related paper in the research seminar. Additionally, since Enix is intended for wireless sensor network platforms, it is related to my final year research too. Therefore I picked this paper.

Enix is currently implemented to run on an experimental platform called EcoSpire. However, according to the authors, it can be easily ported to other platforms too. Among the features provided by Enix, the most significant contribution from my point of view is the virtual memory implementation.

Virtual memory functionality in Enix


Enix supports virtual code memory with the assistance of the compiler. Library functions are compiled into position-independent code (PIC) segments and stored on the Micro-SD card. All user application calls to these functions are directed to a special run-time loader in the kernel space. This loader finds the required function on the Micro-SD card and loads it into an empty space in memory. Since these functions are compiled in a position-independent manner, no run-time relocation is required; copied functions can start running from wherever they are loaded. This is how the overhead of run-time relocation is removed in Enix with the assistance of the compiler. However, I think position-independent code introduces some computational overhead compared to normal executable code. In addition to virtual memory, Enix provides its own file system called EcoFS.
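As a rough analogy only (the names and structure below are mine, not from the paper), the demand-loading idea can be sketched in a few lines of Python: every call goes through a loader that fetches the function from "secondary storage" on first use and caches it in "RAM":

```python
# Toy analogy of Enix-style demand loading: "storage" stands in for the
# Micro-SD card holding position-independent code segments, and "ram"
# for the code memory the run-time loader copies them into.
storage = {
    "blink_led": lambda times: "blink " * times,
    "read_sensor": lambda: 42,
}
ram = {}  # loaded segments, keyed by function name

def call(name, *args):
    """Run-time loader: load the segment on first use, then run it."""
    if name not in ram:           # page the function in from storage
        ram[name] = storage[name]
    return ram[name](*args)

print(call("read_sensor"))        # first call loads, then runs
print(call("blink_led", 2))
print(sorted(ram))                # both segments now resident
```

The point of PIC in the real system is that this "copy into RAM" step needs no address fix-ups, which the analogy cannot show; it only illustrates the indirection through the loader.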

After my presentation we had a nice discussion where we talked about Enix in particular and WSN operating systems in general. Even though I was a little bit nervous before the presentation, I think I did it without any issue. :) I'm feeling very happy about it.

Thursday, July 5, 2012

Dr. Erik Billing is here

Dr. Erik Billing from Umeå University, Sweden, visited here. As I heard, he defended his Ph.D just last January. Even though he writes his name as "Erik", it is pronounced more like "Eh-rik". Anyway, we were allowed to call him "Erik" just as we pronounce the English name. :) I received a few opportunities to attend the lectures he conducted during this visit. He is mainly interested in artificial intelligence and intelligent robotics. He is a nice guy.

His Ph.D dissertation


The title of his Ph.D dissertation is Cognition Rehearsed - Recognition and Reproduction of Demonstrated Behavior. Since he gave some copies of his dissertation to other people, I got a chance to take a look at it; it was the first time I had seen a Ph.D dissertation. Unlike the thesis we are writing for our B.Sc degree, this Ph.D thesis has a very nice, professional look. I don't know whether doctoral theses are usually like that, but this one is good.

During his lectures, he discussed different theoretical aspects, mainly in the robotics area. Additionally, he did some demonstrations with robotics simulators. He said that he will stay here for a few more weeks, so I hope I will get more chances to talk with him and learn something.
There's an article about his ideas here (English translation).

Wednesday, June 13, 2012

In Love With Computer Science



There's a nice article written by a senior lecturer of UCSC, Dr. Chamath Keppitiyagama, about the Turing machine. I really like that article since I'm a big fan of theoretical computer science. It is a good attempt to present an important concept to ordinary people. I thought of reposting its content here to preserve it, so that if the original article ever goes offline, I won't lose it. :)

Link to original article: http://www.nation.lk/edition/component/k2/item/3666-a-computer-for-hundred-rupees?.html


A computer for hundred rupees?

No, I did not miss by several zeros – I meant it. We can build such a cheap computer and I guarantee that it can solve any problem that can be solved by those fancy and expensive computers. Now do not rush to buy processors, power supplies and other equipment, because we do not need them!

You can get the following ‘parts’ under hundred rupees. You need a long paper tape (a very long one), a pencil, an eraser, and a note pad. Take the long paper tape and using the pencil draw equal-sized squares along the tape.

That is it. Now there are some rules to follow. You can move the tape one square to the left or one to the right. You can write ‘1’ or ‘0’ on the square directly under your hand. If you wish you can erase anything on the square directly under your hand. After taking any action you can write a number in the note pad to indicate your ‘state’. The note pad has a table that you can consult before taking any action. This table tells you what to do – write, erase, move left or right – when you are in a given state and after reading a particular value from the tape. All that you can do is limited to those few actions.

That is your 100 rupee computer. I am not joking, but this machine is so powerful that it can solve any problem that can be solved on fancy computer. In fact, if a problem can be solved using a computing device, this computer can solve it. If this computer cannot solve a problem then it can never be solved using even a super computer. Unbelievable, isn’t it? Yet, it is true.

This machine is called a Turing machine. It was designed by Alan Turing in 1936, quite a long time before anybody made a computing device that resembles modern day computers. He did not propose it as a machine to be built and used for actual computation, even though you can easily build one if you really want to and use it for computation. Alan Turing proposed the Turing machine as a model of computation. He proposed that if any problem can be solved using computers this extremely simple machine can solve it and only the problems that can be computed on this machine can be solved on computers. It makes our 100 rupee computer a very powerful one indeed!

I have simplified the description of the machine quite a bit. A Turing machine needs an infinitely long tape, but let us ignore those details. Initially, the problem to be solved is written on the tape using ones and zeros on the squares (strictly speaking they do not even have to be ones and zeros, they can be any symbols). When the machine stops – that is when you stop moving the tape back and forth – the answer will be on the tape encoded in ones and zeros.

Computer scientists and mathematicians use this simple machine to study whether problems can be solved on computers and if so what are the most efficient ways to solve them. They do not actually build the machine, but use it to think of computing as a sequence of those actions allowed in the Turing machine. Even though they look very complicated, modern day computers try to mimic this machine. The role that you played in our 100 rupee computer is played by the processor and the memory chips take the place of the paper tape. The electronic processor is just faster than you and the problems can be solved faster, but apart from the speed, modern computers are not more powerful than a Turing machine. In fact, they are less powerful since none of them have an infinite memory whereas the Turing machine has an infinitely long tape (memory).

The power of this simple machine does not stop there. I am sure you have heard of those wonderful programming languages such as Java, C# and Python. Those programming languages are no more powerful than the simple set of instructions on our rudimentary computer. If you can write a nice program using those languages, you can write the same one using our simple set of instructions, but that would be tedious. Most of us have the notion that computers are all powerful devices. Well they are in some ways, but so is our Turing machine! I am sure now you have lost your faith in computers.

(The writer is a Senior Lecturer, University of Colombo School of  Computing)
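The rules in the article, by the way, map directly onto a small program: a head over a tape, a current state, and a table saying what to do for each (state, symbol) pair. Here is a minimal sketch in Python; the particular state table, which simply inverts every bit on the tape, is my own illustrative example:

```python
def run_turing_machine(tape, table, state="start", blank=" "):
    """Simulate the pencil-and-paper machine from the article.
    table maps (state, symbol) -> (symbol to write, move, next state),
    where move is "R" (right) or "L" (left)."""
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        write, move, state = table[(state, symbol)]
        if head == len(tape):         # grow the "infinite" tape on demand
            tape.append(blank)
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip()

# A state table that flips every bit, then halts at the first blank square.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}
print(run_turing_machine("1011", flip))  # prints "0100"
```

Swapping in a different state table gives a different "program", which is exactly the sense in which the hundred-rupee machine is general-purpose.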

Wednesday, June 6, 2012

Gnuplot For Visualizing Data


When writing research papers and technical documents, displaying data in a useful manner is really important. Especially when presenting evaluation results in statistical form, we need to draw different types of graphs. Since this is important to me, I had been looking for a good tool for some time. Since I'm using Ubuntu Linux, the best-recommended one I could find on the Internet was Gnuplot. It's a free tool and very powerful for drawing different types of graphs.


Since it is a command line tool, it takes some time to learn how to use Gnuplot for our requirements. Therefore I postponed my Gnuplot learning process several times, and finally today I got some first-hand experience. At first I found this blog post. Then it sent me to this tutorial, which contained a lot of helpful information. I recommend it if you really want to learn the Gnuplot tool from the beginning.

To make it easy to memorise what I have learnt so far, I decided to write a little summary of the basic things I need to use Gnuplot. So, here we go.

1. To install gnuplot on my Ubuntu 11.04 system, I issued the following command in the terminal.

    sudo apt-get install gnuplot-x11

2. Now I can start gnuplot by issuing the following command.

    gnuplot

It will show a prompt like the one in the picture above, and everything else has to be done at this prompt. If you want to exit from gnuplot, type 'exit' at the prompt.

3. Now let's create an example plot and write it in to an image file. To do that issue the following commands on the prompt.

    set terminal jpeg 
    set output 'our_file_name.jpg'
    plot sin(x)

Now, in a new terminal, you can go to gnuplot's working directory and open the 'our_file_name.jpg' file to view your graph. You can find out gnuplot's working directory by typing 'pwd' at the prompt, just like in a normal terminal. To change the working directory, use 'cd' at the prompt as follows.

    cd "/path/to/new/directory"

4. To add the names of the graph, the x axis and the y axis use the following commands.

    set title "My graph title"
    set xlabel 'x axis'
    set ylabel 'y axis'

After changing these settings we have to re-run the previous plot command to apply them. We can shorten this with the 'replot' command, as below, so that we don't have to type the plot command again and again (sometimes it is very long, with many parameters).

    set terminal jpeg
    set output 'plot.jpg'
    replot

5. Even though we set different parameters in gnuplot as above, they are lost if we exit gnuplot and start it again later. Therefore it's useful to save the current configuration so that we can load it again.

View the current configuration by issuing the following command.

    show all

You can save the configuration to a file as below.

     save "savefile.plt"

You can load a saved configuration from a file with the following command.

      load "savefile.plt"

6. Now we want to draw a graph using data from a data file. Consider the following file, named 'my_data.dat'.

 1 10 11   
 2 20 23   
 3 30 34   
 4 40 45   
 5 50 56   
 6 60 67        
 7 70 78   
 8 80 89   
 9 90 100  

We have three columns of data in that file. Let's draw a graph with the first two columns.

    set terminal jpeg
    set output 'first_graph.jpg'
    plot "./my_data.dat" using 1:2 title "First line" with lines


first_graph.jpg
Now let's draw two lines in the same figure. The first uses the first column as the x-axis and the second column as the y-axis, while the second uses the first column as the x-axis and the third column as the y-axis. Here's how we do that.

     set terminal jpeg
     set output 'second_graph.jpg'
     plot "./my_data.dat" using 1:2 title "First line" with lines, "./my_data.dat" using 1:3 title "Second line" with lines


second_graph.jpg
There are so many other things to learn in Gnuplot to create nice graphs. I believe that learning gnuplot is not a waste of time, since good data visualisation is an important skill for a computer scientist. So, I should master it in the future. :)
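One more trick worth knowing: all the commands above can be collected into a script file and run in one go with `gnuplot <script>`, which is handy when regenerating figures for a paper. Here is a small Python sketch that writes the data file and a batch version of the plot commands from step 6 (the file names are just examples):

```python
# Generate my_data.dat and a gnuplot batch script; running
# "gnuplot second_graph.plt" afterwards produces the figure.
rows = [
    (1, 10, 11), (2, 20, 23), (3, 30, 34), (4, 40, 45), (5, 50, 56),
    (6, 60, 67), (7, 70, 78), (8, 80, 89), (9, 90, 100),
]

with open("my_data.dat", "w") as f:
    for x, y1, y2 in rows:
        f.write(f"{x} {y1} {y2}\n")

script = "\n".join([
    "set terminal jpeg",
    "set output 'second_graph.jpg'",
    'set title "My graph title"',
    "set xlabel 'x axis'",
    "set ylabel 'y axis'",
    'plot "./my_data.dat" using 1:2 title "First line" with lines, '
    '"./my_data.dat" using 1:3 title "Second line" with lines',
]) + "\n"

with open("second_graph.plt", "w") as f:
    f.write(script)

print("run: gnuplot second_graph.plt")
```

This keeps the figure reproducible: change the data, re-run the script, and the plot is rebuilt without retyping anything at the prompt.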

That's all for now. Enjoy drawing graphs with gnuplot!

Tuesday, June 5, 2012

Moving Into an Important Time

I haven't written anything useful here for a considerable time. Even though I'm really busy with my final year project work, I was unhappy about leaving my blog without writing something. So I'm writing this post to ease my mind, although there's nothing useful in it.

Currently I'm at a crucial point in my final year project. The design phase went without any pain, but now I'm facing many issues at the beginning of the implementation phase. So, for about a month I had huge pressure in my head. I think most of those issues are now solved to a considerable extent, and therefore I hope to make straightforward progress from here. That's about my project.

EasyPic Board (thanks to Chathika for the picture)

This semester I'm taking the Robotics & Embedded Systems course and therefore I have to work with electronics. Even though I have been working closely with embedded platforms for a long time in our Wireless Sensor Networks research, I hadn't stepped into the much lower hardware level. With this new course, I think I will gain the electronics skills that I have been lacking for years.


There's another important thing ahead of us: the ICTer 2012 conference. The International Conference on Advances in ICT for Emerging Regions (ICTer) is mainly organised by our university together with many partners. As the SCoRe research group, we hope to submit a paper to it, so we are completing the paper writing these days. The call for papers notice can be found here.

So, in every aspect it's a busy time. But I will try my best to keep my blog alive since this is the place which allows me to write anything I like.

Tuesday, April 10, 2012

A good tool for designing research posters

Recently I had to design a poster for a conference and was thinking about which tool to use. One of the researchers in our lab suggested I use a free and open source tool called Scribus. I hadn't used it before, but found it really helpful for designing the poster.

Even though I'm not good at designing posters and the like, I had to put in a big effort to come up with my poster. The following image shows how it finally looked. It may not follow all the do's and don'ts of designing academic posters, since I didn't have enough time to worry about them.


I hope Scribus will be my tool for preparing posters for a long time to come.

Wednesday, April 4, 2012

Science and Music all together - The Symphony of Science

Recently I found a very nice website called The Symphony of Science, which contains an excellent set of music made with the voices of many great people. I downloaded some of the music videos and they are really inspiring.

I would like to share two of those music videos which are my favorites.