The Lazy Susan...RE: [p2p-hackers] Hard question....
lemonobrien at yahoo.com
Sun Apr 2 04:56:42 UTC 2006
I'm passive-aggressive, so my algorithms tend to be too: X sends 'stream-file' to Y; Y sends chunks of 'file-data', sequenced and sessioned, back to X. X stores each 'file-data' chunk as it arrives and sends 'resend' when a sequence number is found to be missing; Y calculates the chunk for that sequence and sends the 'file-data' to X again.
Y sends EOF, and X can send 'close' to end the streaming session on Y.
Data is read in sequence as a stream.
Messages are relayed. I call it lazy because a 'resend' is only sent when a sequence is determined to be missing, whereas TCP acknowledges every segment.
I can calculate a running mean toward some elapsed total.
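The receiver side of the scheme above can be sketched as follows. Message names ('file-data', 'resend') follow the post; the class, callback, and wire format are assumptions for illustration, with the actual transport left abstract:

```python
# Sketch of the "lazy" receiver (X): chunks are stored as they arrive,
# and a 'resend' request is sent only when a gap in the sequence numbers
# is detected -- there is no per-chunk ACK.

class LazyReceiver:
    def __init__(self, send_resend):
        self.chunks = {}                # sequence number -> chunk payload
        self.next_needed = 0            # next sequence to deliver in order
        self.send_resend = send_resend  # callback asking Y to resend a sequence

    def on_file_data(self, seq, payload):
        """Store a 'file-data' chunk; request resends for any gap it reveals."""
        self.chunks[seq] = payload
        # Lazy NAK: only when a chunk arrives beyond a hole do we ask
        # for the missing sequence numbers.
        for missing in range(self.next_needed, seq):
            if missing not in self.chunks:
                self.send_resend(missing)

    def read_stream(self):
        """Yield payloads in sequence order, as far as the data is contiguous."""
        while self.next_needed in self.chunks:
            yield self.chunks.pop(self.next_needed)
            self.next_needed += 1
```

For example, if chunk 2 arrives while chunk 1 is still missing, a single 'resend' for sequence 1 goes out, and in-order delivery pauses at the hole until it is filled.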
Matthew Kaufman <matthew at matthew.at> wrote:
> With coding, there is no need for a back channel from the
Forward error correction has its place, but it is no excuse for eliminating
the feedback necessary to perform proper congestion control. There are
numerous reasons why protocols which fail to perform congestion control
(including RTP, as used for VOIP) are a bad idea for both the individual
user (end-link saturation, excess queueing, impact on the congestion
management of parallel TCP flows, routers which drop or de-prioritize
nonconforming flows, etc.) and the Internet as a whole (router queueing,
congestion collapse, etc.).
TCP or protocols with TCP-friendly congestion management are mandatory for
bulk transfer of data. TCP is the easy answer. Reimplementing TCP on UDP or
using TFRC on UDP is the not-so-easy answer.
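For a sense of what TFRC on UDP involves, its core is the TCP throughput equation from RFC 5348, which caps a sender's rate at roughly what a conformant TCP flow would achieve under the same loss rate and round-trip time. This is a minimal sketch of just that rate computation (parameter names follow the RFC; a real TFRC implementation also needs loss-event measurement, feedback packets, and rate limiting):

```python
import math

def tfrc_rate(s, R, p, b=1):
    """Allowed sending rate in bytes/sec per the RFC 5348 throughput equation.

    s: segment size in bytes
    R: round-trip time in seconds
    p: loss event rate (0 < p <= 1)
    b: packets acknowledged per ACK (1 for typical TCP)
    """
    t_RTO = 4 * R  # RFC 5348's recommended retransmit-timeout estimate
    denom = (R * math.sqrt(2 * b * p / 3)
             + t_RTO * 3 * math.sqrt(3 * b * p / 8) * p * (1 + 32 * p ** 2))
    return s / denom
```

The key property is that the allowed rate falls as loss rises, which is what keeps such a flow from starving parallel TCP flows.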
My personal (albeit biased) suggestion is to use amicima's MFP, which gets
you congestion controlled delivery for both reliable *and* unreliable flows,
among many other features.
matthew at matthew.at
p2p-hackers mailing list
p2p-hackers at zgp.org
You don't get no juice unless you squeeze
Lemon Obrien, the Third.