Parallel Computing: Modern Trends in Research, Education, and Application Minitrack


Performance growth of computers is nowadays driven more by scaling out (adding more cores, building grids, etc.) than by scaling up (e.g. increasing clock rates, adding memory). Significant performance gains can therefore only be achieved by writing software that is capable of utilizing new parallel infrastructures, whether multi-core machines or machine grids.

This minitrack therefore covers the latest trends in techniques, algorithms, architectures, infrastructures, processes, and so forth that aim at making efficient use of parallel resources.

Anticipated submissions include, but are not limited to, the following topics:

  • Programming and language paradigms for parallel system development
  • New ways/best practices to teach parallel programming
  • In-memory middleware for process communication and alternatives
  • The Reactive Manifesto and its implications for software technology
  • Design considerations for parallel systems
  • Testing of parallel systems and identification of race conditions
  • Actor-based systems and their advantages for parallel system development
  • Real-time considerations for parallel systems
  • Performance analysis of parallel system development approaches
  • Impact on the software development process
  • Languages designed for parallel computing
  • Parallel system development for the cloud
  • Infrastructures for parallel computing
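Several of the topics above, notably testing and race-condition identification, concern a hazard that is easy to demonstrate: multiple threads performing an unsynchronized read-modify-write on shared state. The following minimal sketch in Python (the language and all names are illustrative, not drawn from the minitrack) shows the lock-protected version of a shared counter:

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    # Guarding the read-modify-write with a lock removes the race:
    # without it, two threads could both read the same stale value
    # and one increment would be lost.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: no increments are lost under the lock
```

Identifying exactly which accesses need such protection in a large system is what makes testing parallel software, and detecting race conditions, a research topic of its own.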

Minitrack Co-Chairs:

Peter Salhofer (Primary Contact)
FH JOANNEUM - University of Applied Sciences, Austria
Email: peter.salhofer@fh-joanneum.at


Recent Submissions

  • Item
    Low-latency XPath Query Evaluation on Multi-Core Processors
    ( 2017-01-04) Karsin, Benjamin ; Casanova, Henri ; Lim, Lipyeow
    XML and the XPath querying language have become ubiquitous data and querying standards used in many industrial settings and across the World-Wide Web. The high latency of XPath queries over large XML databases remains a problem for many applications. While this latency could be reduced by parallel execution, issues such as work partitioning, memory contention, and load imbalance may diminish the benefits of parallelization. We propose three parallel XPath query engines: Static Work Partitioning, Work Queue, and Producer-Consumer-Hybrid. All three engines attempt to solve the issue of load imbalance while minimizing sequential execution time and overhead. We analyze their performance on sets of synthetic and real-world datasets. Results obtained on two multi-core platforms show that while load balancing is easily achieved for most synthetic datasets, real-world datasets prove more challenging. Nevertheless, our Producer-Consumer-Hybrid query engine achieves good results across the board (speedup up to 6.31 on an 8-core platform).
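The work-queue idea in the abstract above can be sketched generically: worker threads repeatedly pull the next unit of work from a shared queue, so faster workers naturally take on more items and load imbalance is smoothed out dynamically. This Python sketch is a hedged illustration of that pattern, not the paper's query engine; the workload stands in for evaluating query subtrees:

```python
import queue
import threading

def run_work_queue(items, num_workers, process):
    """Distribute items dynamically over num_workers threads."""
    q = queue.Queue()
    for item in items:
        q.put(item)
    results = []
    results_lock = threading.Lock()

    def worker():
        while True:
            try:
                # Dynamic scheduling: each thread pulls its next unit
                # of work only when it has finished the previous one.
                item = q.get_nowait()
            except queue.Empty:
                return
            r = process(item)
            with results_lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Squaring numbers stands in for uneven per-item query work.
out = run_work_queue(range(10), num_workers=4, process=lambda x: x * x)
print(sorted(out))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Compared with static partitioning, this trades some queue-contention overhead for resilience to items of very different cost, which is the imbalance the abstract describes for real-world datasets.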
  • Item
    Are Web Applications Ready for Parallelism?
    ( 2017-01-04) Radoi, Cosmin ; Herhut, Stephan ; Sreeram, Jaswanth ; Dig, Danny
    In recent years, web applications have become pervasive. Their backbone is JavaScript, the only programming language supported by all major web browsers. Most browsers run on desktop or mobile devices with parallel hardware. However, JavaScript is by design sequential, and current web applications make little use of hardware parallelism. Are web applications ready to exploit parallel hardware? To answer this question we take a two-step approach. First, we survey 174 web developers regarding the potential and challenges of using parallelism. Then, we study the performance and computation shape of a set of web applications that are representative of the emerging web. We identify performance bottlenecks and examine memory access patterns to determine possible data parallelism. Our findings indicate that emerging web applications do have latent data parallelism, and that JavaScript developers' programming style is not a significant impediment to exploiting it.
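The latent data parallelism the study looks for is of the map-over-independent-elements kind: a loop whose iterations touch disjoint data can be split across cores without changing the result. A generic sketch (Python stands in for the paper's JavaScript setting, and the pixel workload is invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(pixel):
    # Independent per-element work: no pixel depends on another,
    # so the iterations can run in any order, or concurrently.
    return min(pixel + 40, 255)

pixels = [10, 120, 250, 30]

# The sequential loop...
sequential = [brighten(p) for p in pixels]

# ...and the same computation expressed as a data-parallel map.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(brighten, pixels))

print(parallel)  # [50, 160, 255, 70]
assert parallel == sequential
```

Detecting that a loop has this shape from memory access patterns, rather than from the programmer's declaration, is exactly the analysis the abstract describes.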
  • Item
    A Comparison of Task Parallel Frameworks based on Implicit Dependencies in Multi-core Environments
    ( 2017-01-04) Fraguela, Basilio B.
    The greater flexibility that task parallelism offers with respect to data parallelism comes at the cost of higher complexity, due to the variety of tasks and the arbitrary patterns of dependences they can exhibit. These dependences should be expressed not only correctly but optimally, i.e. avoiding over-constraints, in order to obtain the maximum performance from the underlying hardware. There have been many proposals to facilitate this non-trivial task, particularly within the scope of today's ubiquitous multi-core architectures. A particularly interesting family of solutions, because of its wide scope of application, ease of use, and potential performance, is that in which the user declares the dependences of each task and lets the parallel programming framework figure out which concrete dependences appear at runtime, scheduling the parallel tasks accordingly. Nevertheless, as far as we know, there are no comparative studies that help users identify their relative advantages. In this paper we describe and evaluate four tools of this class, discussing the strengths and weaknesses we have found in their use.
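The declare-your-dependences model evaluated above can be illustrated with a toy planner: each task names the data it reads and writes, and the framework derives the execution order from overlapping accesses (the usual read-after-write, write-after-read, and write-after-write hazards). This minimal Python sketch only plans "waves" of independent tasks; the real frameworks additionally run each wave concurrently, and all names here are invented:

```python
def plan_waves(tasks):
    """tasks: list of (name, reads, writes) in program order.
    Returns waves of task names; tasks within one wave have no
    dependences between them and could run in parallel."""
    def conflicts(a, b):
        # b (later in program order) depends on a if their declared
        # accesses collide on some datum.
        ra, wa = set(a[1]), set(a[2])
        rb, wb = set(b[1]), set(b[2])
        return bool(wa & rb      # read-after-write
                    or wa & wb   # write-after-write
                    or ra & wb)  # write-after-read

    deps = {t[0]: {u[0] for u in tasks[:i] if conflicts(u, t)}
            for i, t in enumerate(tasks)}

    waves, done = [], set()
    while len(done) < len(tasks):
        ready = [n for n in deps if n not in done and deps[n] <= done]
        waves.append(ready)
        done |= set(ready)
    return waves

# A toy pipeline: two independent producers, then a consumer of both.
waves = plan_waves([
    ("init_a", [], ["a"]),
    ("init_b", [], ["b"]),
    ("sum",    ["a", "b"], ["c"]),
])
print(waves)  # [['init_a', 'init_b'], ['sum']]
```

Declaring only true data accesses, rather than explicit task-to-task edges, is what lets such frameworks avoid over-constraining the schedule, which is the optimality concern the abstract raises.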
  • Item