
Vega.jl Rebooted – Now with 100% More Pie and Donut Charts!

By: randyzwitch - Articles

Re-posted from: http://randyzwitch.com/vega-jl-julia/

[Image: piedonut, a pie chart and a donut chart]

Mmmmm, chartjunk!

Rebooting Vega.jl

Recently, I’ve found myself without a project to hack on, and I’ve always been interested in learning more about browser-based visualization. So I decided to revive the work that John Myles White had done in building Vega.jl nearly two years ago. And since I’ll be giving an analytics & visualization workshop at JuliaCon 2015, I figured I’d better study the topic in a bit more depth.

Back In Working Order!

The first thing I tackled here was to upgrade the syntax to target v0.4 of Julia. This is just my developer preference, to avoid using Compat.jl when there are so many more visualizations I’d like to support. So if you’re using v0.4, you shouldn’t see any deprecation errors; if you’re using v0.3, well, eventually you’ll use v0.4!

Additionally, I modified the package to recognize the traction that the Jupyter Notebook has gained in the community. Whereas the original version of Vega.jl only displayed output in a browser tab, I’ve overloaded the writemime method to display :VegaVisualization inline in any environment that can display HTML. If you use Vega.jl from the REPL, you’ll still get the same default browser-opening behavior as before.

The First Visualization You Added Was A Pie Chart…

…And Followed With a Donut Chart?

Yup. I’m a troll like that. Besides, being loudly against pie charts is blowhardy (even if studies have shown that people are too stupid to evaluate them).

Adding these two charts (besides trolling) was a proof of concept that I understood the codebase well enough to extend the package. Now that the syntax works on Julia v0.4, I understand how the package works (important!), and the workflow is improved with Jupyter Notebook support, I plan to create all of the visualizations featured in the Trifacta Vega Editor, as well as other standard visualizations such as boxplots. If the community has requests for the order of implementation, I’ll try to accommodate them; just add a feature request on the Vega.jl GitHub issues page.

Why Not Gadfly? You’re Not Starting A Language War, Are You?

No, I’m not that big of a troll. Besides, I don’t think we’ve squeezed all the juice (blood?!) out of the R vs. Python infographic yet; we don’t need another pointless debate.

My sole reason for not improving Gadfly is that I plain don’t understand how the codebase works! There are many amazing computer scientists & developers in the Julia community, and I’m not really one of them. I do, however, understand how to generate JSON strings, and in that sense Vega is the perfect platform for me to contribute to.

Collaborators Wanted!

If you’re interested in visualization, as well as learning Julia and/or contributing to a package, Vega.jl might be a good place to start. I’m always up for collaborating with people, and creating new visualizations isn’t that difficult (especially with the Trifacta examples). So hopefully some of you will be interested enough to join me in adding one more great visualization library to the Julia community.

ModernGL vs GLEW vs PyOpenGL

By: Simon Danisch

Re-posted from: http://randomfantasies.com/2015/05/moderngl-vs-glew-vs-pyopengl/

Benchmark of ModernGL (Julia), GLEW (C++) and PyOpenGL (Python).

[Chart: glbench, measured execution times for ModernGL, GLEW and PyOpenGL]

Relative slowdown compared to GLEW:

[Table: glbench_table, relative slowdown of each wrapper compared to GLEW]

Procedure:

Each function gets called 10^7 times in a tight loop, and the execution time of the loop is measured.
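The actual benchmark code is linked at the end of the post; purely as an illustration of the procedure, a minimal sketch of the timing loop on the Python/PyOpenGL side might look like this (it assumes PyOpenGL is installed and an OpenGL context is already current, e.g. created beforehand with GLUT or GLFW):

    import time
    from OpenGL.GL import glClearColor  # PyOpenGL

    def bench(fn, args, n=10**7):
        # Call fn(*args) n times in a tight loop and return the elapsed
        # wall-clock time in seconds. An OpenGL context must already be
        # current before any GL function is called.
        start = time.time()
        for _ in range(n):
            fn(*args)
        return time.time() - start

    # elapsed = bench(glClearColor, (0.0, 0.0, 0.0, 1.0))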

This was executed on Windows 8.1 with an Intel i5 CPU and an Intel HD 4400 graphics card.

Julia 0.4 was used, the C++ version was compiled with VS13, and for Python the Anaconda distribution with Python 2.7 was used.

The OpenGL function loader in ModernGL has undergone some changes over time. Starting from a very simple solution, there have been pull requests introducing better methods for function loading. The current approach in ModernGL master was not written by me, but by the GitHub user aaalexandrov. Before aaalexandrov’s approach, the fastest option used a fairly new Julia feature called staged functions. In principle this should yield the best performance, as a specialized version of the function is compiled when it is first called. This is a good fit for OpenGL function loading, because the pointer to a function can only be queried after an OpenGL context has been created. When the staged function is called, the pointer can be queried and inlined into the just-in-time-compiled function.

Staged functions only work with the newest Julia build, however, which is why aaalexandrov’s approach is favorable.

ModernGL does pretty well compared to C++, while Python does very badly, being up to 470 times slower in the case of glClearColor. Julia, in contrast, offers nearly the same speed as calling OpenGL functions from C++, as the table shows. As all the OpenGL wrappers are pretty mature by now and bind to the same C library (the video driver), this should mainly be a benchmark of C function call overhead. Python performs badly here, but it must be noted that there are a lot of different Python distributions, and some promise better C interoperability. As this benchmark’s goal is to show that Julia’s ccall interface is comparable to a C function call from inside C++, the Python options have not been researched that thoroughly. From this benchmark it can be concluded that Julia offers a solid basis for an OpenGL wrapper library.

The code and results can be found on GitHub.

A bioinformatics walk-through: Accessing protein-protein interaction interfaces for all known protein structures with PDBe PISA

If this summer’s posting became a little infrequent, part of the blame lies with the computational research I’ve been working on at the Medical Research Council Laboratory of Molecular Biology in Cambridge, on the systems biology of chromosomal translocations and the ensuing chimeric proteins.

A sizeable part of bioinformatics ‘dry lab’ work falls into what the NYT has described as ‘data wrangling’ (or the work of a ‘data janitor’). This post is about accessing the data held in the Protein Data Bank in Europe’s repository of Proteins, Interfaces, Structures and Assemblies (PDBe PISA).

Sent out onto the web to find a source of structural protein-protein interaction data with amino acid-level resolution, my first port of call was the Nucleic Acids Research Molecular Biology Online Database Collection (which I’d read of in the opening chapters of Arthur Lesk’s Introduction to Bioinformatics) where I found a sizeable list of PPI databases.

Not wanting to click through each, I chose to browse this programmatically, using JavaScript-automated AJAX requests (effectively asking the website to give me web pages without displaying them) and just ‘scraping’ what I wanted (full workings here).
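The full workings linked above are in-browser JavaScript; purely as an illustration of the same idea in Python, using requests and BeautifulSoup in place of AJAX, a sketch might look like this (the h1/p selectors are assumptions rather than the real page markup; entry 456 is PDBe’s, as in the JSON below):

    import json
    import requests
    from bs4 import BeautifulSoup

    def scrape_summary(entry_id):
        # Fetch one NAR Database Collection summary page and pull out a
        # few fields; the selectors here are illustrative assumptions.
        url = "http://www.oxfordjournals.org/nar/database/summary/%d" % entry_id
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        return {
            "name": soup.find("h1").get_text(strip=True),
            "entryurl": url,
            "desc": soup.find("p").get_text(strip=True),
        }

    # print(json.dumps(scrape_summary(456), indent=2))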

From these results, here’s a little background info on PDBe:

  {
    "name": "PDBe",
    "url": "http://www.ebi.ac.uk/pdbe/",
    "entryurl": "http://www.oxfordjournals.org/nar/database/summary/456",
    "desc": "EMBL-EBI's Protein Data Bank in Europe (PDBe) is the European resource for the collection, organization and dissemination of data about biological macromolecular structures. PDBe is one of four partners in the worldwide Protein Data Bank (wwPDB), the consortium entrusted with the collation, maintenance and distribution of the global repository of macromolecular structure data. PDBe uses a relational database that presents the data derived from the Protein Data Bank (PDB) in a consistent way and allows users to retrieve meaningful data using complex and sophisticated searches including simple textual queries or more complex 3D structure-based queries. PDBe has also developed a number of advanced tools for analysis of macromolecules. The \"Structure Integration with Function, Taxonomy and Sequence\" (SIFTS) initiative integrates data from a number of bioinformatics resources that is used by major global sequence, structure and protein-family resources. Furthermore, PDBe works actively with the X-ray crystallography, Nuclear Magnetic Resonance (NMR) spectroscopy and cryo-Electron Microscopy (EM) communities and is a partner in the Electron Microscopy Data Bank (EMDB). The active involvement with the scientific communities has resulted in improved tools for structure deposition and analysis.",
    "ref": null,
    "absurl": "http://nar.oxfordjournals.org/cgi/content/abstract/42/D1/D285",
    "email": "pdbe@ebi.ac.uk"
  },

Web scraping can feel quite kludgy, and there are doubtless better ways to do the above. Having said that, it’s great for prototyping: you can use Javascript within a web browser console, i.e. without littering your computer with temporary files. What’s more, dedicated communities like the ScraperWiki forum are around to support and develop the associated tools, and in its more elaborate incarnations ‘scraping’ features in journals like Briefings in Bioinformatics (“Web scraping technologies in an API World” was published there just this week).

After deciding on PDBe PISA thanks to my scraped-together report, and finding no guidance on how to tackle the task, I turned to the bioinformatician’s equivalent of the programming Q&A site Stack Overflow, known as Biostars. My question got a grand total of 0 answers(!), so what follows is my approach — which may be of interest either as a peek into the work that goes under the banner of ‘bioinformatics’ or as a guide for other scientists seeking to access the same information.

First off, a Python script parcelled up a list of every PDB code (the unique identifier for an author-deposited structure from X-ray crystallography, NMR, etc.) in the PDB into comma-separated chunks of 50, which were stuck onto the end of a web-service query as recommended. The server processes these queries according to its “API”: the CGI of cgi-bin in the URL means it’s invoking a script on the server, which in turn expects interfaces.pisa? to be followed by comma-separated PDB codes. Given these expectations, the API responds in a regular manner each time, enabling reliable scripting.
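A minimal sketch of that batching step, assuming the interfaces.pisa? endpoint described above (the helper name is mine):

    # Split the full list of PDB codes into comma-separated chunks of 50
    # and append each chunk to the PISA interfaces query URL.
    PISA = "http://www.ebi.ac.uk/pdbe/pisa/cgi-bin/interfaces.pisa?"

    def batched_queries(pdb_codes, size=50):
        for i in range(0, len(pdb_codes), size):
            yield PISA + ",".join(pdb_codes[i:i + size])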

With over 2000 such queries for interface data (each of them requesting 50 PDB-code-identified structures), this isn’t something you want to be doing manually. It wasn’t clear exactly which PDB entries were needed at the time, so the full complement was downloaded.

This download script works for just one query, putting the received XML in one file – to handle all 2029 queries, a bit of lateral thinking was required. Batches of 50 queries (each containing 50 PDB codes) were executed to make up a single interfacesij.xml file, where i is an integer from 1 to 4, and likewise j from 1 to 10 (plus a bonus 4-11 to catch the final 29). The download scripts (named getxmlij.py accordingly) were written individually by another script — code writing code…
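A rough reconstruction of that generator step; the getxmlij.py and interfacesij.xml names follow the post, while the template body is an illustrative assumption (urllib2 implies Python 2):

    # Emit one getxml{i}{j}.py per block of queries; each generated
    # script downloads its 50 URLs and concatenates the received XML
    # into a single interfaces{i}{j}.xml file.
    SCRIPT = """\
    import urllib2

    urls = {urls!r}
    with open("interfaces{i}{j}.xml", "w") as out:
        for u in urls:
            out.write(urllib2.urlopen(u).read())
    """

    def write_downloaders(blocks):
        # blocks: dict mapping (i, j) to that block's list of query URLs
        for (i, j), urls in blocks.items():
            with open("getxml%d%d.py" % (i, j), "w") as f:
                f.write(SCRIPT.format(urls=urls, i=i, j=j))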

With the download scripts written, the task of running each of them consecutively fell to yet another Python script, which played the sound of Super Mario picking up a coin when each file finished downloading, or the Mario pause-game sound upon encountering an error: partly because I could, and partly because clear feedback becomes necessary for something that takes days across multiple computers.
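Sketched minimally, with play() as a hypothetical audio helper and the sound file names as stand-ins:

    import subprocess

    def run_all(scripts, play):
        # Run each generated download script in turn and give audible
        # feedback on success or failure via the supplied play() helper.
        for script in scripts:
            returncode = subprocess.call(["python", script])
            play("mario_coin.wav" if returncode == 0 else "mario_pause.wav")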

Inevitably a minority of the queries failed, and had to be obtained separately.

Once downloaded, various pattern-matching text-processing programs were run on the XML from within a shell script — readers unfamiliar with programming may have heard of these this week thanks to the 22-year-old security bug(s) being referred to as Shellshock. Shell scripts make looping through files in this manner a simple task, and are becoming essential for everyday file manipulation now that I’m a reformed Windows user. For the 41 XML files, a function runprocessor was called, with instructions to:

  1. Split each file successively at every <pisa_interfaces> tag through to the closing </pisa_interfaces> tag, the line numbers of which were stored together in an ordered list (an “array variable”) pisapairs
  2. Write each of these sections to a cache file xmlcache.xml, of suitable size for parsing by a Python XML parser.
  3. Reduce the time spent by the parser by in turn splitting this cache into just the PDB entries in the shortlist of interest with a function extractsubsets
  4. Initiate a Python script to read the entire cachesubset.xml file into memory and write the pertinent structural data into a report formatted as tab-separated values (TSV); a sketch of this step follows the list. This file is a mere few hundred megabytes, compared to the 120 GB grand total for the XML.
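Sketching step 4: only the pisa_interfaces tag is taken from the steps above, while the other tag names follow the general shape of PISA’s XML output but should be treated as assumptions here:

    import xml.etree.ElementTree as ET

    def xml_to_tsv(xml_path, tsv_path):
        # Parse the cached subset of PISA XML and write one tab-separated
        # row per interface; tag names are assumptions, as noted above.
        tree = ET.parse(xml_path)
        with open(tsv_path, "w") as out:
            for entry in tree.getroot().iter("pdb_entry"):
                code = entry.findtext("pdb_code")
                for interface in entry.iter("interface"):
                    out.write("%s\t%s\t%s\n" % (
                        code,
                        interface.findtext("id"),
                        interface.findtext("int_area"),
                    ))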

Clicking Details for an interface on the list of all interfaces for a given protein structure, e.g. for the only one in spider silk precursor protein spidroin, shows the interfacial residues in yellow:
[Screenshot: PISA residue table for the spidroin interface, with interfacial residues highlighted in yellow]

The output threads together all the interfacial residues and the associated statistical figures for each on a single line per interface, but it’s simple enough to separate each one out according to commas (then colons) to get a long-form residue-per-line output once all the XML is processed.
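The report’s exact field layout isn’t shown here, so this is illustrative only, but the comma-then-colon split amounts to something like:

    def residues_per_line(interface_line):
        # One interface per input line; commas separate residues, and
        # colons separate the fields within each residue (illustrative
        # layout, not the report's actual columns).
        for residue in interface_line.rstrip("\n").split(","):
            yield residue.split(":")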

Progress is indicated in the terminal output, where the current i and j values are printed, followed by which pisapair (i.e. which of the 50 pisa_interfaces sections) is being worked through:

[Screenshot: terminal progress output showing the current i, j and pisapair]

As shown in the logfile, there are inevitable errors, such as Entry not found. It’s simple enough to find the difference between the output report file’s list of PDB codes and the input ‘shortlist’, which can be mapped back to the constituent files for any follow-up investigation (the “wrangling” facet of computational science alluded to earlier), since the order of the original 2029 queries is known.
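A sketch of that set-difference check; the file names and the one-code-per-line / first-column layouts are assumptions:

    def missing_codes(shortlist_path, report_path):
        # PDB codes requested in the shortlist but absent from the
        # output report, sorted for easy follow-up.
        with open(shortlist_path) as f:
            wanted = {line.strip() for line in f if line.strip()}
        with open(report_path) as f:
            got = {line.split("\t", 1)[0] for line in f}
        return sorted(wanted - got)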

I’m putting these scripts together in a code repository on GitHub, with a disclaimer that they’re not fit for all purposes (for instance, if you’re interested in H-bonds, shown in brown in the PISA website residue table above).

A lot of this was painfully slow. There’s nothing to be done about the speed of downloading the files, given that the rate is limited by the server. Yes, there was a lot of data to get through, but Python’s sluggishness at the final step makes me wonder if I could implement some leaner algorithm, parallelise, and so on; with term recommencing, though, optimising code for an already completed task isn’t a top priority. Advice on improvements would be appreciated if you have any.

I’m currently reading Jones & Pevzner’s An Introduction to Bioinformatics Algorithms which gives insight into how you can analyse and improve these types of operations (the book is core reading for a Coursera.org lecture series which kicks off next month), and have been recommended Goldwasser & Tamassia’s Data Structures and Algorithms in Python (a few online resources in a similar vein are available here).

I’ve also been fiddling with Julia, an R-like language with C-like speeds — in a 2012 blog post its creators say they “created Julia, in short, because we are greedy”. Fernando Perez is overseeing its incorporation into IPython Notebooks as ‘Project Jupyter’, and a plotting library inspired by R’s ggplot2 has recently emerged for Julia under the name of Gadfly (a tutorial IPython Notebook is up here).

I’m starting a final year undergraduate project as of this week, on mapping small RNA-seq data to miRNAs, under the supervision of the founder of the database central to cataloguing this class of non-coding RNA ‒ super exciting stuff! :¬)

If you’ve got questions on PISA you think I could help with, feel free to ask here, or shoot me an email.

PDBe PISA homepage

✣ Peter Briggs, a scientific programmer at STFC Daresbury Laboratory, has a nice little guide to the service here.