
Gal Sne

age ~60

from Ashland, MA

Also known as:
  • Sne Gal

Gal Sne Phones & Addresses

  • Ashland, MA
  • Aberdeen, OH
  • Dover, MA
  • Wellesley, MA
  • Brookline, MA
  • Framingham, MA
  • Newton, MA

US Patents

  • Method And Apparatus Including A Shared Resource And Multiple Processors Running A Common Control Program Accessing The Shared Resource

  • US Patent:
    5,884,055, Mar 16, 1999
  • Filed:
    Nov 27, 1996
  • Appl. No.:
    08/753,673
  • Inventors:
    Victor Wai Ner Tung - Shrewsbury MA
    Gal Sne - Wellesley MA
    Stephen Lawrence Scaringella - Natick MA
  • Assignee:
    EMC Corporation - Hopkinton MA
  • International Classification:
    G06F 13/00
  • US Classification:
    395/307
  • Abstract:
    An integrated cached disk array includes host to global memory (front end) and global memory to disk array (back end) interfaces implemented with dual control processors configured to share substantial resources. The dual processors each access independent control store RAM, but run the same processor-independent control program using an implementation that makes the hardware appear identical from both the X and Y processor sides.
  • Redundant Writing Of Data To Cached Storage System

  • US Patent:
    5,890,219, Mar 30, 1999
  • Filed:
    Nov 27, 1996
  • Appl. No.:
    08/757,214
  • Inventors:
    Stephen Lawrence Scaringella - Natick MA
    Gal Sne - Wellesley MA
    Victor Wai Ner Tung - Shrewsbury MA
  • Assignee:
    EMC Corporation - MA
  • International Classification:
    G06F 12/16
  • US Classification:
    711/162
  • Abstract:
    An integrated cached disk array includes host to global memory (front end) and global memory to disk array (back end) interfaces implemented with dual control processors configured to share substantial resources. Each control processor is responsible for two pipelines, respective Direct Multiple Access (DMA) and Direct Single Access (DSA) pipelines, for Global Memory access. Each processor has its own Memory Data Register (MDR) to support DMA/DSA activity. The dual processors each access independent control store RAM, but run the same processor-independent control program using an implementation that makes the hardware appear identical from both the X and Y processor sides. Pipelines are extended to add greater depth by incorporating a prefetch mechanism that permits write data to be put out to transceivers awaiting bus access, while two full buffers of assembled memory data are stored in Dual Port RAM (DPR) and memory data words are assembled in pipeline gate arrays for passing to the DPR. Data prefetch mechanisms are included whereby data is made available to the bus going from Global Memory on read operations, prior to the bus being available for an actual data transfer. Two full buffers of read data are transferred from Global Memory and stored in the DPR while data words are disassembled in the pipeline gate array, independent of host activity.
  • High Performance Integrated Cached Storage Device

  • US Patent:
    5,890,207, Mar 30, 1999
  • Filed:
    Nov 27, 1996
  • Appl. No.:
    08/757,226
  • Inventors:
    Gal Sne - Wellesley MA
    Victor Wai Ner Tung - Shrewsbury MA
    Stephen Lawrence Scaringella - Natick MA
  • Assignee:
    EMC Corporation - Hopkinton MA
  • International Classification:
    G11B 17/22
  • US Classification:
    711/113
  • Abstract:
    An integrated cached disk array includes host to global memory (front end) and global memory to disk array (back end) interfaces implemented with dual control processors configured to share substantial resources. Each control processor is responsible for two pipelines, respective Direct Multiple Access (DMA) and Direct Single Access (DSA) pipelines, for Global Memory access. Each processor has its own Memory Data Register (MDR) to support DMA/DSA activity. The dual processors each access independent control store RAM, but run the same processor-independent control program using an implementation that makes the hardware appear identical from both the X and Y processor sides. Pipelines are extended to add greater depth by incorporating a prefetch mechanism that permits write data to be put out to transceivers awaiting bus access, while two full buffers of assembled memory data are stored in Dual Port RAM (DPR) and memory data words are assembled in pipeline gate arrays for passing to the DPR. Data prefetch mechanisms are included whereby data is made available to the bus going from Global Memory on read operations, prior to the bus being available for an actual data transfer. Two full buffers of read data are transferred from Global Memory and stored in the DPR while data words are disassembled in the pipeline gate array, independent of host activity.
