
Matt Mathis

Email: mathis@psc.edu
Work phone: 412.268.3319
or other contact information.

Point of view: The glass is neither half full nor half empty; it is the wrong size.

For very quick conversations, MathisTCP (AIM), mathis_15206 (YIM), mathis@psc.edu (Jabber), or matt.mathis (Gmail) might be best, but please introduce yourself.

I am dedicated to the idea that an ordinary user running an ordinary network application on an ordinary workstation should either saturate some workstation bottleneck or completely fill some network link.


As of April 19th, 2010, I will be working at Google. Please update your address books: Matt dot Mathis at gmail.

My dedication (above) remains unchanged. I just have to think about a slightly larger pool of ordinary users.


Research Projects

TCP-unfriendly: New! We want to explore the possibility of readjusting how the network and end-systems balance the responsibility for allocating network capacity. In particular we want to look at the use of some form of Fair Queuing or similar mechanisms in the network, combined with a change to TCP congestion control, to make it easier for the network to more accurately regulate the traffic. In the long run these changes would eliminate the need for the "TCP-friendly" property which is currently required for all transport protocols. Although this might seem like a huge paradigm shift for the Internet, independent forces are already driving the most difficult part of these changes: the vast majority of end systems (home users with DSL, cable and FTTH service) are likely to already have their traffic managed by some form of Fair Queuing. We expect this trend to continue, and as a consequence, we believe that these changes can be deployed incrementally, on an as-needed basis.

We want to explore the merits of this paradigm change and identify and investigate the areas that need additional research. Our goal is to build a compelling argument that the IETF should further relax its requirement that all protocols be TCP-friendly under all conditions. Even a minor weakening of the IETF position would make it easier for the Internet to evolve along this path.
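As a rough sketch of the kind of per-flow isolation we have in mind, the toy scheduler below services each backlogged flow in round-robin order, so no single sender can monopolize the bottleneck no matter how aggressively its transport behaves. This is an illustration only; the names and structure are mine, not any deployed queuing discipline:

    from collections import defaultdict, deque

    class ToyFairQueue:
        """Toy per-flow round-robin scheduler: each active flow gets its own
        queue and the link drains one packet per flow per round, so an
        aggressive flow cannot crowd out the others."""

        def __init__(self):
            self.queues = defaultdict(deque)   # flow id -> queued packets
            self.active = deque()              # round-robin order of flow ids

        def enqueue(self, flow_id, packet):
            if not self.queues[flow_id]:
                self.active.append(flow_id)
            self.queues[flow_id].append(packet)

        def dequeue(self):
            """Return the next packet to transmit, or None if idle."""
            if not self.active:
                return None
            flow_id = self.active.popleft()
            queue = self.queues[flow_id]
            packet = queue.popleft()
            if queue:                          # flow still backlogged: requeue it
                self.active.append(flow_id)
            return packet

With this kind of scheduling in the network, an "unfriendly" flow mostly penalizes itself, which is exactly the property that would allow the TCP-friendly requirement on end systems to be relaxed.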

A new page has been set up here.

NPAD (Network Path and Application Diagnosis) is focused on developing diagnostics that mitigate the effects of "symptom scaling". The end-to-end network performance debugging problem is difficult because the only symptom of nearly all flaws is reduced performance. Furthermore, that one symptom is scaled by the round-trip time (RTT), such that conventional diagnostics typically yield false pass results when run over short, local paths. Under the NPAD project we assembled tools and resources to enable people to quickly diagnose and correct the majority of flaws that affect users connected to high speed networks. [Currently seeking additional funding.]
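To see the scaling concretely, consider the familiar rough model that steady-state TCP throughput is bounded by MSS / (RTT * sqrt(p)) for loss rate p. The numbers below are purely illustrative:

    def tcp_rate_estimate(mss_bytes, rtt_s, loss_rate):
        """Rough steady-state TCP throughput in bits/s, from the
        rate <= MSS / (RTT * sqrt(p)) approximation."""
        return 8 * mss_bytes / (rtt_s * loss_rate ** 0.5)

    # The same flaw (say, 0.01% loss at one switch) seen over two paths:
    for label, rtt in [("local test, 1 ms RTT", 0.001),
                       ("cross-country, 70 ms RTT", 0.070)]:
        rate = tcp_rate_estimate(1460, rtt, 0.0001)
        print(f"{label}: ~{rate / 1e6:.0f} Mbit/s")

    # The local test reports over 1000 Mbit/s (it saturates the link and
    # "passes"), while the identical flaw limits the long path to ~17 Mbit/s.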

MTU is still a huge bottleneck, but this project is on hold.

My current and past papers and presentations are archived here.

Past Research Projects

Web100 was focused on providing sufficient instrumentation within TCP (e.g. a MIB) to support the broad diagnosis of all parts of the application and network. At a minimum it can provide basic information about application performance (does TCP stall waiting for the application?), and detailed information about TCP itself (e.g. buffer utilization) and the network (e.g. RTT, reordering and loss statistics). The final MIB, RFC 4898, is now on the standards track. Furthermore, some end-system bottlenecks (e.g. buffer space) can be autotuned [Semke] as a side effect of the improved instrumentation. Web100 autotuning is in Linux, and a similar algorithm is present in Windows Vista. [DONE]
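To give a feel for this kind of per-connection visibility, here is a minimal sketch that reads a few comparable statistics (RTT, congestion window, loss and retransmit counters) from a live socket through the Linux TCP_INFO socket option. This is the mainline analogue rather than the Web100 kernel itself, and the field offsets are assumed to follow struct tcp_info in linux/tcp.h:

    import socket
    import struct

    def tcp_snapshot(sock):
        """Pull a few per-connection statistics from a connected TCP socket
        via the Linux TCP_INFO socket option (a mainline analogue of the
        Web100 per-connection instrumentation, not Web100 itself)."""
        raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
        # Bytes 0-7 are one-byte fields (state, ca_state, retransmits, probes,
        # backoff, options, window scales, flags); 32-bit counters follow.
        u32 = struct.unpack_from("=24I", raw, 8)
        return {
            "rto_us":   u32[0],    # current retransmission timeout
            "snd_mss":  u32[2],    # sender maximum segment size, bytes
            "lost":     u32[6],    # segments currently presumed lost
            "retrans":  u32[7],    # segments currently retransmitted
            "rtt_us":   u32[15],   # smoothed round-trip time, microseconds
            "snd_cwnd": u32[18],   # congestion window, in segments
        }

    # Example use: is the sender's window or the path the current limit?
    # stats = tcp_snapshot(sock)
    # ceiling = stats["snd_cwnd"] * stats["snd_mss"] * 8 / (stats["rtt_us"] * 1e-6)
    # print(f"cwnd-limited ceiling: {ceiling / 1e6:.1f} Mbit/s")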

Net100 was focused on applying Web100 to DoE applications. While Web100 had to take a purist stance on appropriate TCP enhancements, Net100 could afford to embrace a number of workarounds. [DONE]


How Fast is Fast?

Data Rate Chart

This chart shows the transfer time vs. file size for various data rates. For example, at 100 megabits/second (which I consider to be the "baseline" for high speed transfers) it takes slightly less than 2 minutes to transfer a gigabyte and about a day to transfer a terabyte. At 10 gigabits/second, which is now the standard for Internet "trunks", it takes about 1 second to transfer a gigabyte and about 20 minutes to transfer a terabyte. At 1 megabit/second, which is close to DSL or cable rates (depending on the exact service), it takes several hours for a gigabyte and several months for a terabyte.
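The chart is easy to reproduce: the arithmetic is just the size in bits divided by the rate, ignoring protocol overhead and assuming the link is actually kept full. A minimal sketch:

    def transfer_time(size_bytes, rate_bits_per_s):
        """Idealized transfer time in seconds: payload bits / link rate."""
        return size_bytes * 8 / rate_bits_per_s

    sizes = {"1 gigabyte": 1e9, "1 terabyte": 1e12}
    rates = {"1 Mbit/s": 1e6, "100 Mbit/s": 1e8, "10 Gbit/s": 1e10}

    for rate_name, rate in rates.items():
        for size_name, size in sizes.items():
            t = transfer_time(size, rate)
            print(f"{size_name} at {rate_name}: {t:,.0f} s (~{t / 3600:.1f} hours)")

    # Raw numbers: a gigabyte at 100 Mbit/s is 80 s and a terabyte is about
    # 22 hours; real transfers carry some overhead, hence the rounder figures above.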


Other Interests

I am in the process of moving my personal pages elsewhere. My family is more than slightly complicated. My daughter Emma has made us all very proud.

I am past President of the East Suburban Unitarian Universalist Church (ESUUC), in Murrysville PA (a suburb of Pittsburgh, adjacent to Monroeville and Plum).

I am the treasurer of CDN/CDSS which runs weekly contra dances in Pittsburgh.

Evil is defined by mortals who think they know "The Truth" and use force to apply it to others.
          -me.


This page is http://staff.psc.edu/mathis/index.html.

For additional information check out these pages: Pittsburgh Supercomputing Center, Network research at PSC, or Matt Mathis. Please send comments and suggestions to mathis@psc.edu.