Riverbed Technology since May 2012
Sr Director
Riverbed Technology May 2010 - May 2012
Director of Engineering
Riverbed Technology Jun 2007 - May 2010
Manager - R&D (Software)
Riverbed Technology Jul 2003 - Jun 2007
MTS
Network Appliance Jan 2001 - Jul 2003
MTS
Education:
Carnegie Mellon University 1999 - 2000
MS, networking
University of Mumbai 1995 - 1999
BSEE, Engineering
Jai Hind College, Mumbai 1993 - 1995
Skills:
Storage, Cloud Computing, Distributed Systems, Linux Kernel, File Systems, Perl, Software Engineering, C, System Architecture, Virtualization, Kernel, Linux, Enterprise Software, TCP/IP, Unix, Data Center, Networking, Scalability, Device Drivers, Kernel Programming, SaaS, SCSI, Software as a Service, High Availability, Software Development, C++, Storage Area Networks (SAN), Computer Architecture, Team Management, Cloud Storage, Debugging
Inventors:
David Tze-Si Wu - Fremont CA, US; Soren Lasen - San Francisco CA, US; Nitin Gupta - Fremont CA, US; Vivasvat Keswani - Fremont CA, US
Assignee:
Riverbed Technology, Inc. - San Francisco CA
International Classification:
H04L 12/28
US Classification:
370/392, 709/227
Abstract:
Network traffic is monitored and an optimal framing heuristic is automatically determined and applied. Framing heuristics specify different rules for framing network traffic. While a framing heuristic is applied to the network traffic, alternative framing heuristics are speculatively evaluated for the network traffic. The results of these evaluations are used to rank the framing heuristics. The framing heuristic with the best rank is selected for framing subsequent network traffic. Each client/server traffic flow may have a separate framing heuristic. The framing heuristics may be deterministic based on byte count and/or time or based on traffic characteristics that indicate a plausible point for framing to occur. The choice of available framing heuristics may be determined partly by manual configuration, which specifies which framing heuristics are available, and partly by automatic processes, which determine the best framing heuristic to apply to the current network traffic from the set of available framing heuristics.
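The ranking loop the abstract describes can be sketched as follows. This is a minimal illustration, not the patented implementation: the byte-count heuristic and the frame-size score are hypothetical stand-ins for whatever heuristics and ranking criteria the actual system uses. Candidate heuristics are speculatively evaluated against observed traffic, ranked, and the best one is selected for subsequent framing:

```python
class ByteCountHeuristic:
    """Deterministic heuristic: emit a frame boundary after a byte limit."""
    def __init__(self, limit):
        self.limit = limit
        self.count = 0

    def should_frame(self, chunk):
        self.count += len(chunk)
        if self.count >= self.limit:
            self.count = 0
            return True
        return False

def evaluate(heuristic, traffic, frame_target=512):
    """Score a heuristic by how close its frames come to a target size
    (lower is better; a hypothetical ranking criterion)."""
    sizes, current = [], 0
    for chunk in traffic:
        current += len(chunk)
        if heuristic.should_frame(chunk):
            sizes.append(current)
            current = 0
    if not sizes:
        return float("inf")
    return sum(abs(s - frame_target) for s in sizes) / len(sizes)

def select_heuristic(heuristics, observed_traffic):
    """Speculatively evaluate every candidate and return the best-ranked one."""
    ranked = sorted(heuristics, key=lambda h: evaluate(h, observed_traffic))
    return ranked[0]
```

A per-flow deployment, as the abstract suggests, would simply keep one selected heuristic per client/server flow and re-run the selection as traffic characteristics change.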
Throttling Of Predictive Acks In An Accelerated Network Communication System
Inventors:
Kartik Subbanna - Fremont CA, US; Nitin Gupta - San Francisco CA, US; Daniel Conor O'Sullivan - San Francisco CA, US; Shashidhar Merugu - Mountain View CA, US; Steven James Procter - San Francisco CA, US; Vivasvat Manohar Keswani - San Francisco CA, US
Assignee:
Riverbed Technology, Inc. - San Francisco CA
International Classification:
G06F 15/16
US Classification:
709/203, 709/202, 709/218, 709/228, 709/232
Abstract:
In a system where transactions are accelerated with asynchronous writes that require acknowledgements, and writes are pre-acknowledged at their source, a destination-side transaction accelerator includes a queue for queuing writes to a destination, at least some of the writes being pre-acknowledged by a source-side transaction accelerator before the write completes at the destination; a memory for storing the status of the destination-side queue and possibly other determinants; and logic for signaling the source-side transaction accelerator with instructions to alter its pre-acknowledgement rules, holding off on or pursuing pre-acknowledgements based on the destination-side queue status. The rules can take into account adjusting the flow of pre-acknowledged requests or pre-acknowledgements at the sender-side transaction accelerator based at least on the computed logical length.
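The destination-side throttling signal described in the abstract can be sketched with a simple hysteresis on queue depth. The high/low water marks are assumed values for illustration; the patent's actual rules and "computed logical length" determinant are more elaborate:

```python
from collections import deque

HIGH_WATER = 8   # assumed: queue depth at which pre-acks are suspended
LOW_WATER = 2    # assumed: depth at which pre-acks resume

class DestinationAccelerator:
    def __init__(self):
        self.queue = deque()
        self.preack_enabled = True

    def enqueue_write(self, write):
        self.queue.append(write)
        self._signal()

    def complete_write(self):
        self.queue.popleft()
        self._signal()

    def _signal(self):
        # Hysteresis: tell the source side to hold off on pre-acks when the
        # destination queue backs up, and to pursue them again once it drains.
        if len(self.queue) >= HIGH_WATER:
            self.preack_enabled = False
        elif len(self.queue) <= LOW_WATER:
            self.preack_enabled = True

class SourceAccelerator:
    def __init__(self, dest):
        self.dest = dest

    def handle_write(self, write):
        """Return True if the write may be pre-acknowledged locally."""
        self.dest.enqueue_write(write)
        return self.dest.preack_enabled
```

The two water marks prevent the signal from oscillating on every write, a standard flow-control design choice.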
Method And Apparatus For Acceleration By Prefetching Associated Objects
Inventors:
Charles Huang - Palo Alto CA, US; Nitin Gupta - Fremont CA, US; Vivasvat Keswani - Fremont CA, US; Bart Robinson - Richmond CA, US
Assignee:
Riverbed Technology, Inc. - San Francisco CA
International Classification:
G06F 15/173 G06F 15/16 G06F 9/34
US Classification:
709/223, 709/219, 709/246, 711/213
Abstract:
Association information is used to build association trees to associate base pages and embedded objects at a proxy. An association tree has a root node containing a URL for a base page, and zero or more leaf nodes each containing a URL for an embedded object. In most cases, an association tree will maintain the invariant that all leaves contain distinct URLs. However, it is also possible to have an association tree in which the same URL appears in multiple nodes. An association tree may optionally contain one or more internal nodes, each of which contains a URL that is an embedded object for some other base page, but which may also be fetched as a base page itself. Given a number of association trees and a base-page URL, a prefetch system finds the root or interior node corresponding to that URL (if any) and traverses the tree from that node, prefetching URLs until the URL of the last leaf node is prefetched. The prefetching starts the process of bringing over the various embedded objects before the user or program would ordinarily fetch them.
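The association-tree traversal described above can be sketched with a plain tree of URLs. This is an illustrative data structure under the abstract's invariants (root is a base page, leaves are embedded objects, interior nodes can be both); the node layout and `fetch` callback are assumptions, not the product's API:

```python
class Node:
    """One node of an association tree; the URL of a base page,
    interior page, or embedded object."""
    def __init__(self, url, children=None):
        self.url = url
        self.children = children or []

def find_node(root, url):
    """Find the root or interior node whose URL matches, if any."""
    if root.url == url:
        return root
    for child in root.children:
        found = find_node(child, url)
        if found:
            return found
    return None

def prefetch(trees, base_url, fetch):
    """Given a base-page URL, locate its node in the association trees and
    prefetch every URL below it, depth-first, until the last leaf."""
    for tree in trees:
        node = find_node(tree, base_url)
        if node:
            stack = list(node.children)
            while stack:
                n = stack.pop()
                fetch(n.url)          # start bringing the object over early
                stack.extend(n.children)
            return
```

Matching an interior node works the same way: its subtree is prefetched as if it were a base page, which mirrors the abstract's dual-role nodes.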
Rules-Based Transaction Prefetching Using Connection End-Point Proxies
Inventors:
David Wu - Fremont CA, US; Vivasvat Keswani - San Francisco CA, US; Case Larsen - Union City CA, US
Assignee:
Riverbed Technology - San Francisco CA
International Classification:
G06F 15/16
US Classification:
709/206
Abstract:
Network proxies reduce server latency in response to a series of requests from client applications. Network proxies intercept messages between clients and a server. Intercepted client requests are compared with rules. When client requests match a rule, additional request messages are forwarded to the server on behalf of a client application. In response to the additional request messages, the server provides corresponding response messages. A network proxy intercepts and caches the response messages. Subsequent client requests are intercepted by the network application proxy and compared with the cached messages. If a cached response message corresponds with a client request message, the response message is returned to the client application immediately instead of re-requesting the same information from the server. A server-side network proxy can compare client requests with the rules and send additional request messages. The corresponding response messages can be forwarded to a client-side network proxy for caching.
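The rules-based prefetching cycle in the abstract can be sketched as a tiny proxy. The `server` and `rules` callables are hypothetical stand-ins for the intercepted connection and the configured rule set; the real system splits this work between client-side and server-side proxies:

```python
class PrefetchProxy:
    def __init__(self, server, rules):
        self.server = server   # callable: request -> response (the origin)
        self.rules = rules     # callable: request -> predicted follow-on requests
        self.cache = {}

    def handle(self, request):
        # Serve immediately when a prefetched response is already cached.
        if request in self.cache:
            return self.cache.pop(request)
        response = self.server(request)
        # When a rule matches, issue the predicted follow-on requests on the
        # client's behalf and cache their responses for later interception.
        for extra in self.rules(request):
            self.cache[extra] = self.server(extra)
        return response
```

The latency win comes from the second round trip being answered from the cache instead of the origin server.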
Wan-Optimized Local And Cloud Spanning Deduplicated Storage System
Inventors:
Greg Taleck - San Francisco CA, US; Vivasvat Keswani - Fremont CA, US; Nitin Parab - Menlo Park CA, US; James Mace - San Francisco CA, US
Assignee:
RIVERBED TECHNOLOGY, INC. - San Francisco CA
International Classification:
G06F 17/30
US Classification:
707/622, 707/692, 707/E17.005
Abstract:
A spanning storage interface facilitates the use of cloud storage services by storage clients. The spanning storage interface presents one or more data interfaces to storage clients at a network location, such as file, object, data backup, archival, and storage block based interfaces. The data interfaces allow storage clients to store and retrieve data using non-cloud-based protocols. The spanning storage interface may perform data deduplication on data received from storage clients. The spanning storage interface may transfer the deduplicated version of the data to the cloud storage service. The spanning storage interface may include local storage for storing a copy of all or a portion of the data from storage clients. The local storage may be used as a local cache of frequently accessed data, which may be stored in its deduplicated form.
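The deduplicated transfer path described above can be sketched with content-hash chunking. This is a minimal illustration only: the fixed 4 KiB chunk size, the dict standing in for the cloud service, and the chunk "recipe" are all assumptions, not the product's wire format:

```python
import hashlib

CHUNK = 4096  # assumed fixed-size chunking; a real system may use
              # content-defined chunk boundaries

class SpanningStore:
    def __init__(self, cloud):
        self.cloud = cloud   # dict standing in for the cloud storage service
        self.local = {}      # local cache of chunks keyed by digest

    def put(self, data):
        """Store data, transferring only chunks the cloud has not yet seen,
        and return the recipe of digests needed to reassemble it."""
        recipe = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            recipe.append(digest)
            self.local[digest] = chunk          # cache in deduplicated form
            if digest not in self.cloud:        # deduplication: skip known chunks
                self.cloud[digest] = chunk      # WAN transfer happens here
        return recipe

    def get(self, recipe):
        """Reassemble data, preferring the local cache over the cloud copy."""
        return b"".join(self.local.get(d) or self.cloud[d] for d in recipe)
```

Because repeated or shared chunks hash to the same digest, only unique chunks cross the WAN, which is the optimization the title refers to.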
Disaster Recovery Using Local And Cloud Spanning Deduplicated Storage System
Inventors:
Greg Taleck - San Francisco CA, US; Vivasvat Keswani - Fremont CA, US; Nitin Parab - Menlo Park CA, US; James Mace - San Francisco CA, US
Assignee:
RIVERBED TECHNOLOGY, INC. - San Francisco CA
International Classification:
G06F 11/20
US Classification:
714/4.11, 714/E11.073
Abstract:
A spanning storage interface facilitates the use of cloud storage services by storage clients and may perform data deduplication. The spanning storage interface may include local storage for caching data from storage clients. A disaster recovery application includes at least first and second spanning storage interfaces at first and second network locations. The second spanning storage interface is provided for at least disaster recovery operations. The second spanning storage interface includes second local storage for improving data access performance. A copy of the local cache of the first spanning storage interface is transferred to the second local storage while the first network location is operating. In the event of a disaster affecting the first network location, the second spanning storage interface can provide data access to the first network location's data with improved performance from using the copy of local cache in the second local storage.
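The cache-replication step in the abstract can be sketched as two sites sharing one cloud back end. The `Site` shape and the dict-backed cloud are illustrative assumptions; the point is that the DR site's pre-seeded cache lets it serve reads locally after a failover:

```python
class Site:
    """One spanning storage interface: a local cache in front of the
    shared cloud copy."""
    def __init__(self, cloud):
        self.cloud = cloud
        self.cache = {}

    def put(self, key, value):
        self.cache[key] = value
        self.cloud[key] = value

    def get(self, key):
        # A local cache hit avoids a slow WAN round trip to the cloud copy.
        return self.cache.get(key, self.cloud.get(key))

def replicate_cache(primary, secondary):
    """Copy the primary site's hot cache to the DR site while the primary
    is still operating, so failover reads are fast from the first access."""
    secondary.cache.update(primary.cache)
```

Without the replication step the DR site would still be correct (the data is in the cloud) but every first access after a disaster would pay full cloud latency.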
Rules-Based Transactions Prefetching Using Connection End-Point Proxies
Inventors:
David Tze-Si Wu - Fremont CA, US; Vivasvat Keswani - San Francisco CA, US; Case Larsen - Union City CA, US
Assignee:
Riverbed Technology, Inc. - San Francisco CA
International Classification:
G06F 15/16
US Classification:
709/203
Abstract:
Network proxies reduce server latency in response to a series of requests from client applications. Network proxies intercept messages between clients and a server. Intercepted client requests are compared with rules. When client requests match a rule, additional request messages are forwarded to the server on behalf of a client application. In response to the additional request messages, the server provides corresponding response messages. A network proxy intercepts and caches the response messages. Subsequent client requests are intercepted by the network application proxy and compared with the cached messages. If a cached response message corresponds with a client request message, the response message is returned to the client application immediately instead of re-requesting the same information from the server. A server-side network proxy can compare client requests with the rules and send additional request messages. The corresponding response messages can be forwarded to a client-side network proxy for caching.
Inventors:
David Tze-Si Wu - Fremont CA, US; Vivasvat Keswani - Fremont CA, US; Nitin Parab - Menlo Park CA, US
Assignee:
RIVERBED TECHNOLOGY, INC. - San Francisco CA
International Classification:
G06F 15/167
US Classification:
709214
Abstract:
The cloud storage services are extended with a cloud storage service access protocol that enables users to specify a desired storage tier for each data stream. In response to receiving storage tier specifiers via the protocol, the cloud storage service performs storage operations to identify target storage devices having attributes matching those associated with the requested storage tier. The cloud storage service stores a data stream from the storage client in the identified target storage device associated with the desired storage tier. Storage tiers can be defined based on criteria including capacity costs; access latency; availability; activation state; bandwidth and/or transfer rates; and data replication. The cloud storage service protocol allows data streams to be transferred between storage tiers, storage devices to be activated or deactivated, and data streams to be prefetched and cached. The cloud storage services may charge storage clients based on storage tier use and associated operations.
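The tier-to-device matching in the abstract can be sketched as attribute filtering. The tier definitions, device list, and cost/latency attributes below are entirely hypothetical; the abstract's real criteria also include availability, activation state, bandwidth, and replication:

```python
TIERS = {
    # Assumed tier definitions: max capacity cost ($/GB) and access latency (ms).
    "archive":  {"max_cost": 0.01, "max_latency": 60000},
    "standard": {"max_cost": 0.05, "max_latency": 100},
    "premium":  {"max_cost": 0.20, "max_latency": 10},
}

DEVICES = [
    # Assumed device inventory with per-device attributes.
    {"name": "tape", "cost": 0.005, "latency": 45000},
    {"name": "disk", "cost": 0.03,  "latency": 20},
    {"name": "ssd",  "cost": 0.15,  "latency": 1},
]

def pick_device(tier):
    """Identify a target storage device whose attributes satisfy the
    requested storage tier, as the access protocol requires."""
    spec = TIERS[tier]
    for dev in DEVICES:
        if dev["cost"] <= spec["max_cost"] and dev["latency"] <= spec["max_latency"]:
            return dev["name"]
    raise LookupError(f"no device satisfies tier {tier!r}")
```

Moving a data stream between tiers, as the protocol allows, amounts to re-running this match with a new tier specifier and migrating the stream to the newly selected device.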