Yi Tang Data Science and Emacs

etags - Build a TAG for Multiple R Packages

Here is what I tried in order to build a TAGS file for multiple R packages. It enables me to jump to the location where a function or variable is defined, and to modify it if I want to.

Useful variables and functions

ess-r-package-library-path
the default paths in which to look for packages; it should be a list.
ess-r-package-root-file
if a folder contains this file (DESCRIPTION), then the folder is an R package.
(ess-build-tags-for-directory DIR TAGFILE)
builds a tags table for DIR and writes it to TAGFILE.
tags-table-list
List of file names of tags tables to search.
(visit-tags-table FILE &optional LOCAL)
Tell tags commands to use tags table file.
;; new variable 
(defvar ess-r-package-library-tags nil
  "A TAG file for multiple R packages.")

(setq ess-r-package-library-path '("~/tmp/feather/R" "~/tmp/RPostgres/"))
(setq ess-r-package-library-tags "~/tmp/all_tags")

(dolist (pkg-path ess-r-package-library-path)
  (let ((pkg-name (ess-r-package--find-package-name pkg-path)))
    (unless (and pkg-name pkg-path
                 (file-exists-p (expand-file-name ess-r-package-root-file pkg-path)))
      (error "Not a valid package. No '%s' found in `%s'." ess-r-package-root-file pkg-path))
    (ess-build-tags-for-directory pkg-path ess-r-package-library-tags)
    ))

Note that the workhorse is ess-build-tags-for-directory, which does what its name suggests. The core of this function uses the find and etags programs. The find program finds files with extensions .cpp, .R, .Rnw, etc., and then feeds them (via a pipe) to the etags program, which generates a TAGS table. These two steps are demonstrated in the following snippet, which is taken from the source code of ess-build-tags-for-directory.

(setq find-cmd (format "find %s -type f -size 1M \\( -regex \".*\\.\\(cpp\\|jl\\|[RsrSch]\\(nw\\)?\\)$\" \\)" (car ess-r-package-library-path)))

(setq regs (delq nil (mapcar (lambda (l)
                               (if (string-match "'" (cadr l))
                                   nil ;; remove for time being
                                 (format "/%s/\\%d/"
                                         (replace-regexp-in-string "/" "\\/" (nth 1 l) t)
                                         (nth 2 l))))
                             imenu-generic-expression)))
(setq tags-cmd (format "etags -o %s --regex='%s' -" "~/lala"
                       (mapconcat 'identity regs "' --regex='")))

(setq sh-cmd (format "%s | %s" find-cmd tags-cmd))
(shell-command sh-cmd)

Note that when these are used in Emacs, the path to the new TAGS table is appended to the tags-table-list variable, so that the user can call xref-find-definitions (M-.) to jump to a definition (if the point is on a symbol) or to select which function/variable to jump to. The user can then inspect the definition, modify it if necessary, and call xref-pop-marker-stack (M-,) to jump back.
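For reference, here is a minimal sketch of that last step, assuming the TAGS file built above already exists at ess-r-package-library-tags:

;; register the new table so xref/etags commands can search it
(add-to-list 'tags-table-list ess-r-package-library-tags)
;; or visit it explicitly for the current session:
;; (visit-tags-table ess-r-package-library-tags)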

Compare the RPostgres and RPostgreSQL Packages

R is a great language for R&D. It's fast to write prototypes in, and it has great visualisation tools. One of the constraints of R is that it stores data in system memory. When the data becomes too big to fit in memory, the user has to manually split the dataset and then aggregate the output later. This process is inefficient and error-prone for a non-technical user.

I started an R development project to automate this split-aggregate process. A viable solution is to store the whole dataset in PostgreSQL and let R fetch one small chunk of the data at a time, do the calculation, and then save the output back to PostgreSQL. This solution requires frequent data transfers between the two systems, which could be a performance bottleneck. So I compared two R packages that interface R with PostgreSQL.

RPostgreSQL
was developed as part of the Google Summer of Code 2008 program. It is no longer under active development; the last release was in 2013.
RPostgres
is a newer package that provides similar functionality to RPostgreSQL but is rewritten in C++ using Rcpp. The development is led by Kirill Müller.

Based on my testing, the RPostgres package is about 30% faster than RPostgreSQL.

The testing set-up is quite simple: an R script sends data to and fetches data from a remote PostgreSQL database, and logs how long each task takes to complete in R. To reduce the influence of other factors on the speed, it repeats this process 20 times and uses the minimal run-time as the final score. The dataset transferred between R and PostgreSQL is a flat table with three columns, and the number of rows varies from ten thousand to one million.

The run-times in seconds are plotted below against the number of rows for each package and operation.

[Figure: run time (seconds) against number of rows, one panel per operation, coloured by package]

Here is a summary of what I observed:

  1. RPostgreSQL is slower than RPostgres. For getting data out, it's 75% slower, which is massive! For writing, the difference is smaller, about 20%. Combining both scores, it is about 33% slower overall.
  2. In particular, RPostgreSQL is slower to read than to write; the ratio is about 1.5. Whereas RPostgres is quicker to read than to write; the ratio is about 0.8. This is an interesting observation.
  3. Both packages have a nice property: the reading/writing time depends linearly on the number of rows. This makes time estimation reliable. I would be confident to say that for 2 million rows, it takes the RPostgres package about 6 seconds to read.

I don't know which part of the implementation makes RPostgres faster. I guess it's the use of C++ and the magical Rcpp package.

Here is the script in case you want to run your own tests.

library(data.table)                     
library(ggplot2)
library(microbenchmark)
library(RPostgreSQL)
library(DBI)   
                                        # config for PostgreSQL database
host.name <- NULL
database.name <- NULL
postgres.user <- NULL
postgres.passwd <- NULL
postgres.port <- NULL
temporary.table.name <- NULL

                                        # config for testing
nrows <- seq(10 * 1e3, 1 * 1e6, length = 10)
repeats <- 20


                                        # open PostgreSQL connection
pg.RPostgreSQL <- dbConnect(dbDriver("PostgreSQL"),
                           host = host.name,
                           dbname = database.name,
                           user = postgres.user,
                           password = postgres.passwd,
                           port = postgres.port)
pg.RPostgres <- dbConnect(RPostgres::Postgres(),
                         host = host.name,
                         dbname = database.name,
                         user = postgres.user,
                         password = postgres.passwd,
                         port = postgres.port)

ReadWriteWrapper <- function(pg.connection) {
                                        # helper function 
    write <- function() dbWriteTable(pg.connection, temporary.table.name, dt, overwrite = TRUE)
    read <- function() dbReadTable(pg.connection, temporary.table.name)

    var <- list()
    for (n in nrows) {
                                        # create a dataset
        dt <- data.table(x = sample(LETTERS, n, T),  # character
                        y = rnorm(n), # double
                        z = sample.int(n, replace = TRUE)) # integer

                                        # read and write once first.
        write()
        read()

                                        # run and log run-time
        res <- microbenchmark(write(),
                             read(),
                             times = repeats)

                                        # parse 
        var[[as.character(n)]] <- data.table(num_row = n,
                                            operation = res$expr,
                                            time = res$time)
    }

                                        # aggregate and return
    rbindlist(var)
}

                                        # run
df0 <- ReadWriteWrapper(pg.RPostgres); df1 <- ReadWriteWrapper(pg.RPostgreSQL)
df0$package <- "RPostgres"; df1$package <- "RPostgreSQL"
df <- rbind(df0, df1)
plot.df <- df[, min(time) / 1e9, .(num_row, operation, package)]

## generate plot
plot.df[, operation := gsub("\\(|\\)", "", operation)]
ggplot(plot.df, aes(x=num_row, y=V1, col = package)) +
    geom_path() +
    geom_point() +
    facet_wrap(~operation) +
    theme_bw() +
    labs(x="Number of rows",
         y="Run time (sec)"
         )

How to Create a Screencast GIF in Emacs

[GIF: screencast of the expression evaluation program stepping through an expression]

I've always wanted to create a GIF using Emacs to demonstrate some features; it just looks so cool. I finally got a chance after attending the Leeds Code Dojo. The final exercise was a bit unusual: we had to write a basic expression evaluation program, without using the eval function, in whatever language we chose. The first problem we had was to figure out the order in which to evaluate sub-expressions. For example, in the expression (5 * (2 + 1)), we know we first add 2 to 1 to get 3, and then multiply 3 by 5. It sounds trivial, but it is actually hard to write a program to do that.

I used a regular expression1 to locate the innermost expression to evaluate, then replaced that expression with its evaluated result, and continued these two steps until there was no expression left2.

The above GIF shows each step of an expression evaluation program written in Emacs Lisp.
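Here is a minimal sketch of that reduce-the-innermost-expression idea (it is not the original dojo program; the function name and the regexp are illustrative, and it only handles fully parenthesised binary expressions):

(defun yt/eval-expression-string (expr)
  "Evaluate a fully parenthesised arithmetic EXPR given as a string."
  (let ((inner "(\\s-*\\([0-9.]+\\)\\s-*\\([-+*/]\\)\\s-*\\([0-9.]+\\)\\s-*)"))
    (while (string-match inner expr)
      (let* ((a   (string-to-number (match-string 1 expr)))
             (op  (intern (match-string 2 expr)))
             (b   (string-to-number (match-string 3 expr)))
             (val (funcall op a b)))
        ;; splice the result back in place of the innermost expression
        (setq expr (replace-match (number-to-string val) t t expr))))
    (string-to-number expr)))

;; (yt/eval-expression-string "(5 * (2 + 1))") ;=> 15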

This post shows how to make a GIF in Emacs on an Ubuntu system.

Dependencies

There are three packages to install first. We need recordmydesktop to capture the motion of the screen, mplayer to view the video, and imagemagick to convert the recorded video into a GIF file. They can be installed easily with the apt-get command, as in the following bash shell script:

sudo apt-get install recordmydesktop mplayer imagemagick

On the Emacs side, I use the camcorder package to control the workflow. It is hosted in the MELPA repository and can be installed by

(package-install 'camcorder)

Then everything should work nicely together.

Workflow

After these packages are installed, creating a GIF is simple, requiring only three steps.

1. Initiate the recording

In Emacs,

  • Switch to the buffer we want to record (let's call this buffer the recording buffer),
  • Initiate the recording with the M-x camcorder-record command,
  • Choose where to save the video file.

A new frame containing the recording buffer will pop up, wrapped inside a white rectangular box. Everything inside the box will be recorded and saved to the video file. Note that if we move the window or cover it with other windows, we will probably get undesired results.

2. Record

Choose the recording buffer/frame,

  • Press F11 to pause/resume,
  • Show some cool things,
  • Press F12 to stop.

Note that the demonstration must have a visible effect on the recording buffer. We can use the with-current-buffer function to send output to a particular buffer; for example,

(with-current-buffer "Demo_Buffer"
  (insert "Start to demo: "))

will insert "Start to demo: " into the Demo_Buffer.

I found it useful to wrap the demonstration in a function and bind it to a key, because I will probably run it many times.

(defun yt/camcorder-show-off ()
  (interactive)
  (goto-char (point-min))
  (insert "going to show you something cool, don't blink your eyes.")
  (sleep-for 2)
  ;;;; apply some functions
  (insert "\nExciting isn't?"))

(define-key camcorder-mode-map [f5] 'yt/camcorder-show-off)

There are two functions that are helpful for controlling the flow: use the sleep-for function to make the program wait for a few seconds, and use y-or-n-p to choose whether to proceed or switch the flow, as in the sketch below.
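Here is a small illustrative sketch combining the two (the function name and the inserted text are made up):

(defun yt/camcorder-demo-with-pause ()
  (interactive)
  (insert "First part of the demo...\n")
  (sleep-for 2)                          ; give viewers time to read
  (when (y-or-n-p "Continue with the second part? ")
    (insert "Second part of the demo.\n")))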

3. Make gif

After the demo is finished,

  • Type M-x camcorder-convert to convert a video file to a GIF file,
  • Choose a file name for the GIF file,
  • Select the conversion method, and choose the one that uses mplayer with imagemagick.

We will probably repeat steps 1-3 several times until we are happy with the GIF.

Footnotes:

1

A regular expression might not be the right tool for this task, but it works.

2

Everything is actually an expression

Migrate to Ubuntu

My MacBook Pro's hard drive stopped working last week, and I managed to recover most of the data from a Time Machine back-up from 6 months ago. But I couldn't get mu and mu4e working again. I was fed up with googling and trying things, and decided to migrate to Ubuntu. It would save me a lot of frustration and time spent making my Mac and office PC work the same way.

Ideally, I would build an Ubuntu system on the Mac which is exactly the same as the one on my office PC, by just copying everything over1. As a minimalist, though, I decided to build the system from scratch and install software one by one, so that I can have a better understanding of what the necessities are for me.

In the last few days, I have become extra mindful about what I use the Ubuntu system in the office for and how, and I realise the things I need can be grouped into three categories:

  1. Configuration,
    1. the .ssh folder for the ssh-agent,
    2. the .fonts folder for new fonts,
    3. the .mbsyncrc file for syncing emails,
    4. the .ledgerrc file.
  2. Software for
    1. Development: git, gcc, Emacs, and R.
    2. Writing: org-mode and LaTeX.
    3. Email: mu, mu4e, and mbsync.
    4. Finance: ledger.
  3. Personal git repositories
    1. public repositories on GitHub,
    2. private repositories on BitBucket.

For 1), since they are small, I can zip them up and copy them over, or even better, create a git repository so that keeping the two machines in sync becomes easier.

For 2), I need to find each piece of software's package name in Ubuntu's software repository and then install all of them with a script. The dependencies should be resolved automatically.

For 3), I need to create a shared folder between the host system and the Ubuntu system, and then copy over the ~/git/ folder.

It really sounds like a plan! I am going to download the Ubuntu installation file now and hopefully the transition will be very smooth.

My Experience with Repetitive Strain Injury (RSI)

Some days I typed more than 80 thousand keystrokes in Emacs alone. This looks pretty awesome at first sight, but it can cause serious health problems.

Last month, I felt a burning pain in my forearms. It is a symptom of repetitive strain injury (RSI). I realised that if I continued typing like that, one day I would never be able to program again, like the Emacs celebrities in Xah Lee's article about RSI.

Since then I've deliberately tried to avoid aimless and unproductive typing, take more typing breaks, think things through before trying them, and write more on paper.

Things are getting better: I don't feel severe pain any more, only occasional discomfort.

But I need to find a better way to improve, because sometimes I have an idea but can't touch the keyboard. That feeling really sucks.

So I investigated the Hydra package and used it to group related commands together, so that only two keystrokes are needed to perform frequent tasks.

For example, to search for something in the current project, instead of typing M-x helm proj grep (that's 16 keystrokes), I only need F5 G with Hydra. The implementation is listed in this post.
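Here is a rough sketch of how such a hydra could look (the head bindings, the helm-projectile-grep command, and the F5 key are my assumptions, not necessarily the exact implementation from the post):

(require 'hydra)

(defhydra yt/hydra-project (:color blue)
  "project"
  ("G" helm-projectile-grep "grep in project") ; assumed to be the command behind "helm proj grep"
  ("f" helm-projectile-find-file "find file"))

(global-set-key (kbd "<f5>") 'yt/hydra-project/body)

With this, pressing F5 followed by G runs the project-wide grep in two keystrokes.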

But calling functions/commands in Emacs accounts for only a small proportion of my typing; most of the time, I write code and reports.

This is where Yasnippet kicks in: it enables me to type less without losing quality. For example, I use this snippet quite often when writing R code:

res <- sapply(seq_len(n), function(i) {
    ## 
})

That's more than 40 keystrokes. Yasnippet can shorten it to only 6! After I type sapply and hit TAB, it expands to the region above.
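Here is a sketch of how such a snippet could be defined from Emacs Lisp (the snippet key, the placeholders, and the ess-mode symbol are my assumptions; snippets are more commonly kept in per-mode snippet files):

(require 'yasnippet)

;; expand "sapply" + TAB into the skeleton shown above
(yas-define-snippets 'ess-mode
  '(("sapply"
     "res <- sapply(seq_len(${1:n}), function(${2:i}) {\n    $0\n})"
     "sapply skeleton")))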

I will investigate the Yasnippet package next week. If you know any good tutorials for Yasnippet or snippets for writing R code, please share your resources.

If you have any questions or comments, please post them below. If you liked this post, you can share it with your followers or follow me on Twitter!