Carnegie Mellon University
With the rapid growth of data in recent years, processing large datasets efficiently has become increasingly important. As physical constraints have largely halted performance improvements in single processors, using multiple processors or cores in parallel has become essential for solving problems faster and for solving larger problems. Parallel processing is now ubiquitous, as even many personal computers and smartphones contain multiple cores.
My research is in designing large-scale parallel algorithms and methods for writing efficient parallel programs. I have developed Ligra, a framework for processing large graph data on shared-memory machines, designed fast parallel algorithms for many fundamental problems and data structures, and developed tools for writing deterministic parallel programs.
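Ligra's core abstraction is mapping a user-supplied function over the edges incident to a "frontier" of active vertices. The following is a minimal sequential Python sketch of that style; the function names and simplified interface here are illustrative only, since Ligra itself is a C++ framework that executes these maps in parallel on shared-memory machines.

```python
def edge_map(graph, frontier, update, cond):
    """Apply update to edges leaving the frontier; return the new frontier.

    Sketch of a Ligra-style EdgeMap: cond(v) filters target vertices,
    and update(u, v) returns True if v should join the next frontier.
    """
    next_frontier = set()
    for u in frontier:          # Ligra processes these vertices in parallel
        for v in graph[u]:
            if cond(v) and update(u, v):
                next_frontier.add(v)
    return next_frontier

def bfs(graph, source):
    """Breadth-first search expressed with edge_map, frontier by frontier."""
    parent = {source: source}
    frontier = {source}
    while frontier:
        frontier = edge_map(
            graph, frontier,
            # Record u as v's parent; succeed only if we set it first.
            update=lambda u, v: parent.setdefault(v, u) == u,
            # Only visit vertices that have no parent yet.
            cond=lambda v: v not in parent)
    return parent

# Example: a small directed graph as adjacency lists.
g = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs(g, 0))  # parent pointers for every vertex reachable from 0
```

The appeal of this interface is that graph traversals are written as a short loop over frontiers, while the framework handles parallelism and the choice between sparse and dense frontier representations.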
My research helps others take advantage of the parallelism available in today’s computers. I have developed fast parallel algorithms that allow users to process their data on shared-memory machines more efficiently than before. In addition, the framework and tools that I have designed enable users to write and debug their own high-performance parallel programs more easily.
The Facebook Fellowship allowed me to attend conferences around the world, where I had the opportunity to interact and share ideas with many researchers in both academia and industry. During my visit to the Facebook headquarters last year, I shared my work on large-scale graph processing with Facebook researchers and learned about their own work on parallel graph processing.
Being a Facebook Fellow was a great experience. I appreciated Facebook's generous stipend and its funding for conference travel and computer equipment. My interactions with Facebook researchers were very fruitful, as I gained a better understanding of which kinds of problems matter in the real world.
I plan to continue my research on designing large-scale parallel algorithms and on creating tools and frameworks to make parallel programming easier. This includes exploring methods for dealing with non-determinism in parallel programs, techniques for writing efficient concurrent data structures, and designing simple algorithms that are understandable and usable by non-experts. I am also interested in extending my work to heterogeneous architectures and to the distributed-memory setting. I am excited to take part in the effort to address the important challenges in parallel computing.