Boston University Libraries OpenBU

    Distributed Parallel Computing in Mermera: Mixing Noncoherent Shared Memories

    Date Issued
    1996-03-07
    Author(s)
    Heddaya, Abdelsalam
    Sinha, Himanshu
    Permanent Link
    https://hdl.handle.net/2144/1582
    Citation (published version)
    Heddaya, Abdelsalam; Sinha, Himanshu. "Distributed Parallel Computing in Mermera: Mixing Noncoherent Shared Memories", Technical Report BUCS-1996-005, Computer Science Department, Boston University, March 7, 1996. [Available from: http://hdl.handle.net/2144/1582]
    Abstract
    Programmers of parallel processes that communicate through shared globally distributed data structures (DDS) face a difficult choice. Either they must explicitly program DDS management, by partitioning or replicating them over multiple distributed memory modules, or be content with a high-latency coherent (sequentially consistent) memory abstraction that hides the DDS' distribution. We present Mermera, a new formalism and system that enable a smooth spectrum of noncoherent shared memory behaviors to coexist between the above two extremes. Our approach allows us to define known noncoherent memories in a new, simple way, to identify new memory behaviors, and to characterize generic mixed-behavior computations. The latter are useful for programming with multiple behaviors that complement each other's advantages. On the practical side, we show that the large class of programs that use asynchronous iterative methods (AIM) can run correctly on slow memory, one of the weakest, and hence most efficient and fault-tolerant, noncoherence conditions. An example AIM program to solve linear equations is developed to illustrate: (1) the need for concurrently mixing memory behaviors, and (2) the performance gains attainable via noncoherence. Other program classes tolerate weak memory consistency by synchronizing in such a way as to yield executions indistinguishable from coherent ones. AIM computations on noncoherent memory yield noncoherent, yet correct, computations. We report performance data that exemplifies the potential benefits of noncoherence, in terms of raw memory performance as well as application speed.
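
    The following is a minimal, hypothetical sketch (not Mermera code, and not the program from the report) of the kind of asynchronous iterative method the abstract describes: an asynchronous Jacobi iteration for Ax = b in which each worker updates its own components of the shared solution vector using whatever values of the other components it happens to read, possibly stale. The function name and parameters are illustrative assumptions.

    ```python
    # Hypothetical sketch of an asynchronous iterative method (AIM):
    # asynchronous Jacobi iteration for Ax = b. Workers read the shared
    # vector x without synchronization; stale reads only delay convergence
    # (for suitable A, e.g. strictly diagonally dominant), they do not
    # break correctness -- the property that lets AIM programs tolerate
    # weak or noncoherent shared memory.
    import threading
    import numpy as np

    def async_jacobi(A, b, num_workers=4, sweeps=200):
        n = len(b)
        x = np.zeros(n)                      # shared solution vector, no locking
        chunks = np.array_split(np.arange(n), num_workers)

        def worker(rows):
            for _ in range(sweeps):
                for i in rows:
                    # Unsynchronized read of the other components of x.
                    s = A[i] @ x - A[i, i] * x[i]
                    x[i] = (b[i] - s) / A[i, i]

        threads = [threading.Thread(target=worker, args=(rows,)) for rows in chunks]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n = 100
        A = rng.random((n, n)) + n * np.eye(n)   # strictly diagonally dominant
        b = rng.random(n)
        x = async_jacobi(A, b)
        print("residual:", np.linalg.norm(A @ x - b))
    ```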
    Collections
    • CAS: Computer Science: Technical Reports [585]

