Pinboard (jm)
https://pinboard.in/u:jm/public/
recent bookmarks from jm

Packets-per-second limits in EC2 (2019-04-26)
https://stressgrid.com/blog/pps_limits_in_ec2/
By running these experiments, we determined that each EC2 instance type has a packet-per-second budget. Surprisingly, this budget goes toward the total of incoming and outgoing packets. Even more surprisingly, the same budget gets split between multiple network interfaces, with some additional performance penalty. This last result argues against using multiple network interfaces when tuning the system for higher networking performance.
The maximum budget for m5.metal and m5.24xlarge is 2.2M packets per second. Given that each HTTP transaction takes at least four packets, we can translate this to a maximum of 550k requests per second on the largest m5 instance with Enhanced Networking enabled.
Tags: aws ec2 networking pps packets tcp ip benchmarking
Bookmark: https://pinboard.in/u:jm/b:94369fd8e8b0/
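
A back-of-the-envelope restatement of that arithmetic (the 2.2M pps budget and the four-packet floor per transaction are from the article; the request/ACK/response/ACK breakdown is one plausible reading of where those four packets go):

    # Translate a packets-per-second budget into an HTTP request ceiling.
    PPS_BUDGET = 2_200_000        # m5.metal / m5.24xlarge, incoming + outgoing

    # Assumed packet cost of one keep-alive HTTP transaction against the
    # combined in+out budget: request, ACK of request, response, ACK of response.
    PACKETS_PER_TRANSACTION = 4

    max_rps = PPS_BUDGET // PACKETS_PER_TRANSACTION
    print(f"max requests/sec: {max_rps:,}")   # -> 550,000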

Spammergate: The Fall of an Empire (2017-03-06)
https://mackeeper.com/blog/post/339-spammergate-the-fall-of-an-empire
In that screenshot, an RCM co-conspirator describes a technique in which the spammer seeks to open as many connections as possible between themselves and a Gmail server. This is done by purposefully configuring your own machine to send response packets extremely slowly, and in a fragmented manner, while constantly requesting more connections.
Then, when the Gmail server is almost ready to give up and drop all connections, the spammer suddenly sends as many emails as possible through the pile of connection tunnels. The receiving side is then overwhelmed with data and will quickly block the sender, but not before processing a large load of emails.
(via Tony Finch)
Tags: via:fanf spam antispam gmail blocklists packets tcp networking
Bookmark: https://pinboard.in/u:jm/b:e5c526710904/
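
What's described is essentially a slowloris-style abuse of TCP flow control: tiny, widely spaced writes that force the peer to hold connections open. A minimal sketch of just that write pattern, assuming a hypothetical local test server (the host, port, payload, and timings are illustrative, not from the article):

    import socket
    import time

    # Dribble a payload out two bytes at a time so the receiving server must
    # keep the connection open far longer than a normal transfer would take.
    HOST, PORT = "127.0.0.1", 2525     # hypothetical local test server

    payload = b"MAIL FROM:<test@example.com>\r\n"

    with socket.create_connection((HOST, PORT)) as sock:
        # Disable Nagle so each tiny write really leaves as its own packet.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for i in range(0, len(payload), 2):
            sock.sendall(payload[i:i + 2])   # two-byte fragment
            time.sleep(5)                    # stall between fragments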

How both TCP and Ethernet checksums fail (2015-10-14)
http://www.evanjones.ca/tcp-and-ethernet-checksums-fail.html
At Twitter, a team had an unusual failure where corrupt data ended up in memcache. The root cause appears to have been a switch that was corrupting packets. Most packets were being dropped and the throughput was much lower than normal, but some were still making it through. The hypothesis is that occasionally the corrupt packets had valid TCP and Ethernet checksums. One "lucky" packet stored corrupt data in memcache. Even after the switch was replaced, the errors continued until the cache was cleared.
Yet another occurrence of this bug. When it happens, it tends to _really_ screw things up, because it's so rare -- we had monitoring for this in Amazon, and when it fired it was overwhelmingly due to host-level kernel/libc/RAM issues rather than anything in the network. Amazon design principles were to add app-level checksumming throughout, which of course catches the lot.
Tags: networking tcp ip twitter ethernet checksums packets memcached
Bookmark: https://pinboard.in/u:jm/b:3f82b6d6833f/
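
A minimal sketch of that app-level checksumming idea, assuming a simple CRC-32 framing (the 4-byte big-endian prefix and the choice of CRC-32 are mine, not a description of Amazon's actual scheme):

    import struct
    import zlib

    # Wrap a payload with a CRC-32 so corruption that slips past the TCP and
    # Ethernet checksums is caught before the data is trusted (e.g. before
    # it is written into a cache).
    def frame(payload: bytes) -> bytes:
        return struct.pack("!I", zlib.crc32(payload)) + payload

    def unframe(message: bytes) -> bytes:
        (expected,) = struct.unpack("!I", message[:4])
        payload = message[4:]
        if zlib.crc32(payload) != expected:
            raise ValueError("application-level checksum mismatch")
        return payload

    assert unframe(frame(b"value-to-cache")) == b"value-to-cache"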

How to receive a million packets per second on Linux (2015-06-19)
https://blog.cloudflare.com/how-to-receive-a-million-packets/
To sum up, if you want perfect performance you need to:

- Ensure traffic is distributed evenly across many RX queues and SO_REUSEPORT processes. In practice, the load is usually well distributed as long as there are a large number of connections (or flows).
- Have enough spare CPU capacity to actually pick up the packets from the kernel.
- To make things harder, keep both the RX queues and the receiver processes on a single NUMA node.
Tags: linux networking performance cloudflare packets numa so_reuseport sockets udp
Bookmark: https://pinboard.in/u:jm/b:7c3b21a94945/
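
A minimal sketch of the SO_REUSEPORT part of that recipe (Linux-only; the port and worker count are arbitrary, and pinning workers and RX queues to the same NUMA node would be done separately, e.g. with taskset and ethtool):

    import os
    import socket

    # Several processes bind the same UDP port with SO_REUSEPORT; the kernel
    # spreads incoming flows across the sockets, so each process drains its
    # own receive queue instead of contending for a single socket.
    PORT = 4321          # arbitrary
    NUM_WORKERS = 4      # arbitrary; in practice, match RX queues / cores

    def worker() -> None:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
        sock.bind(("0.0.0.0", PORT))
        while True:
            data, addr = sock.recvfrom(65535)
            # ... count or process the packet here ...

    if __name__ == "__main__":
        for _ in range(NUM_WORKERS - 1):
            if os.fork() == 0:   # child process becomes a worker
                worker()
        worker()                 # parent works too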

Chris Baus: TCP_CORK: More than you ever wanted to know (2014-09-11)
http://baus.net/on-tcp_cork/
Even with buffered streams, the application must be able to instruct the OS to forward all pending data when the stream has been flushed, for optimal performance. The application does not know where packet boundaries reside, hence buffer flushes might not align on packet boundaries. TCP_CORK can pack data more effectively, because it has direct access to the TCP/IP layer. [..]
If you do use an application buffering and streaming mechanism (as Apache does), I highly recommend applying the TCP_NODELAY socket option, which disables Nagle's algorithm. All calls to write() will then result in immediate transfer of data.
Tags: networking tcp via:nmaurer performance ip tcp_cork linux syscalls writev tcp_nodelay nagle packets
Bookmark: https://pinboard.in/u:jm/b:b1cec5bb1d43/
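
A minimal sketch of both options on Linux (the header/body split is illustrative and error handling is omitted):

    import socket

    # TCP_CORK: hold small writes back so header and body leave in full
    # packets, then uncork to flush whatever remains.
    def send_corked(sock: socket.socket, header: bytes, body: bytes) -> None:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)   # cork
        sock.sendall(header)   # queued; not sent as a partial packet yet
        sock.sendall(body)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)   # uncork: flush

    # TCP_NODELAY: the alternative Baus recommends when the application does
    # its own buffering -- disable Nagle so every completed write() goes out
    # immediately.
    def enable_nodelay(sock: socket.socket) -> None:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)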

The First Few Milliseconds of an HTTPS Connection (2014-04-07)
http://www.moserware.com/2009/06/first-few-milliseconds-of-https.html
Tags: https tls ssl security http protocols packets networking
Bookmark: https://pinboard.in/u:jm/b:080ecad2fc32/

DNS results now being manipulated in Turkey (2014-03-31)
https://news.ycombinator.com/item?id=7492000
Tags: turkey twitter dpi dns opendns google networking filtering surveillance proxying packets udp
Bookmark: https://pinboard.in/u:jm/b:e014bcbe14a6/

SSL/TLS overhead (2013-06-21)
http://netsekure.org/2010/03/tls-overhead/
Tags: network tls ssl performance latency speed networking internet security packets tcp handshake
Bookmark: https://pinboard.in/u:jm/b:9941807de694/