Parallelism is going mainstream. Many chip manufacturers are turning to multicore processor designs, rather than scalar-oriented frequency increases, to improve the performance of their desktop, enterprise, and mobile processors. This endeavor is unlikely to succeed in the long term if mainstream applications cannot be parallelized to take advantage of tens, and eventually hundreds, of hardware threads.

Parallelizing programs is a difficult problem. User specification of parallelism is fraught with pitfalls such as race conditions, non-determinism in thread interactions, a lack of debugging methods, and poorly understood performance consequences. Automatic parallelization of existing imperative languages will not meet the challenge, because pervasive dependencies and aliasing defeat the necessary analyses. Multicore architectures will also differ in significant ways from their multisocket predecessors. For example, the communication-to-compute bandwidth ratio is likely to be higher, which should benefit performance. More generally, multicore architectures introduce several new dimensions of variability, in both performance guarantees and architectural contracts such as the memory model, that may not stabilize for several generations of product.
Functional and logic programming languages, and other languages with a controlled use of side effects, can greatly simplify parallel programming. Such declarative programming admits a deterministic semantics even when the underlying implementation is highly non-deterministic. Besides simplifying programming, this determinism also simplifies debugging and reasoning about correctness.
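The determinism point can be illustrated with a minimal sketch (in Python here, purely for illustration; the declarative languages the workshop emphasizes make the same guarantee at the language level): when the mapped function is pure, a parallel map produces the same result no matter how the runtime interleaves the worker threads.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # Pure function: no side effects, no shared mutable state,
    # so the result depends only on the input.
    return x * x

xs = list(range(10))

# Threads may be scheduled in any order, but Executor.map
# returns results in input order, so the output is deterministic.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(square, xs))

sequential = [square(x) for x in xs]
assert parallel == sequential  # same answer on every run
```

The key design point is that purity, not scheduling, fixes the observable result; with shared mutable state, the same parallel map could yield a different answer on each run.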
DAMP is a one-day workshop seeking to explore ideas in programming language design that will greatly simplify programming for multicore architectures, and more generally for tightly coupled parallel architectures. The emphasis will be on functional and logic programming, but any programming language ideas that aim to raise the level of abstraction are welcome. DAMP seeks to bring together researchers in declarative approaches to parallel programming and to foster cross-fertilization among different approaches.
Specific topics include, but are not limited to:
- suitability of functional and logic programming languages to multicore applications;
- run-time issues such as garbage collection or thread scheduling;
- architectural features that may enhance the parallel performance of declarative languages;
- type systems for accurately tracking or limiting dependencies, aliasing, effects, and non-pure features;
- ways of specifying or hinting at parallelism;
- ways of specifying or hinting at data placement that abstract away from the details of the machine;
- compiler techniques;
- experiences of and challenges in making declarative programming practical;
- technology for debugging parallel programs;
- the design and implementation of domain-specific declarative languages for multicore.
We are looking for short papers (3-5 pages). These can take the form of a position paper, new ideas, initial results, an overview of ongoing research, or even a historical perspective. Submissions are due November 3; notification of acceptance will be sent by December 1; final copies are due January 5.
Please submit papers in PDF by sending them to firstname.lastname@example.org; submissions are due by midnight PST on the deadline. The title page should include the title, the authors (with contact information for the primary author), and the abstract. This workshop is a follow-up to Declarative Programming Languages for Multicore Architectures 2006, which was held as part of POPL 2006.
Programme Chair

Guy Blelloch, Department of Computer Science, Carnegie Mellon University, email@example.com
Programme Committee

Perry Cheng, IBM T. J. Watson Research Center
Gopal Gupta, University of Texas at Dallas
Kevin Hammond, University of St Andrews
Robert Harper, Carnegie Mellon University
Suresh Jagannathan, Purdue University
Charles Leiserson, Massachusetts Institute of Technology
Christian Lengauer, University of Passau
Simon Peyton Jones, Microsoft Research
General Chair

Neal Glew, Intel Corporation, Santa Clara, CA, USA, firstname.lastname@example.org
Sponsored in part by the National Science Foundation ALADDIN Center, and by Intel Corp.