Pinboard (jm)
https://pinboard.in/u:jm/public/
Recent bookmarks from jm

AWS Fault Isolation Boundaries (2022-11-22)
https://docs.aws.amazon.com/whitepapers/latest/aws-fault-isolation-boundaries/abstract-and-introduction.html
Tags: aws dependencies uptime reliability iam

research!rsc: Our Software Dependency Problem (2019-01-24)
https://research.swtch.com/deps
The kind of critical examination of specific dependencies that I outlined in this article is a significant amount of work and remains the exception rather than the rule. But I doubt there are any developers who actually make the effort to do this for every possible new dependency. I have only done a subset of them for a subset of my own dependencies. Most of the time the entirety of the decision is “let’s see what happens.” Too often, anything more than that seems like too much effort.
But the Copay and Equifax attacks are clear warnings of real problems in the way we consume software dependencies today. We should not ignore the warnings. I offer three broad recommendations.
* Recognize the problem. If nothing else, I hope this article has convinced you that there is a problem here worth addressing. We need many people to focus significant effort on solving it.
* Establish best practices for today. We need to establish best practices for managing dependencies using what’s available today. This means working out processes that evaluate, reduce, and track risk, from the original adoption decision through to production use. In fact, just as some engineers specialize in testing, it may be that we need engineers who specialize in managing dependencies.
* Develop better dependency technology for tomorrow. Dependency managers have essentially eliminated the cost of downloading and installing a dependency. Future development effort should focus on reducing the cost of the kind of evaluation and maintenance necessary to use a dependency. For example, package discovery sites might work to find more ways to allow developers to share their findings. Build tools should, at the least, make it easy to run a package’s own tests. More aggressively, build tools and package management systems could also work together to allow package authors to test new changes against all public clients of their APIs. Languages should also provide easy ways to isolate a suspect package.
Tags: dependencies software coding work

FFmpeg, SOX, Pandoc and RSVG for AWS Lambda (2019-01-08)
https://serverless.pub/lambda-utility-layers/
The basic AWS Lambda container is quite constrained, and until recently it was relatively difficult to include additional binaries in Lambda functions. Lambda Layers make that easy. A Layer is a common piece of code that is attached to your Lambda runtime in the /opt directory. You can reuse it in many functions, and deploy it only once. Individual functions do not need to include the layer code in their deployment packages, which means that the resulting functions are smaller and deploy faster. For example, at MindMup, we use Pandoc to convert markdown files into Word documents. The actual Lambda function code is only a few dozen lines of JavaScript, but before layers, each deployment of the function had to include the whole Pandoc binary, larger than 100 MB. With a layer, we can publish Pandoc only once, so we use significantly less overall space for Lambda function versions. Each code change now requires just a quick redeployment.
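Since layers are unpacked under /opt, a layer zip that ships its binary under bin/ ends up at /opt/bin/<name>. A minimal sketch of how function code might locate such a binary — the helper names and the pandoc invocation are my own illustration, with a PATH fallback so it also runs outside Lambda:

```python
import os
import shutil
import subprocess

def find_layer_binary(name: str, layer_bin: str = "/opt/bin") -> str:
    """Resolve a binary shipped in a Lambda layer, falling back to PATH.

    A layer zip containing bin/pandoc is mounted at /opt/bin/pandoc.
    """
    candidate = os.path.join(layer_bin, name)
    if os.access(candidate, os.X_OK):
        return candidate
    found = shutil.which(name)
    if found is None:
        raise FileNotFoundError(f"{name} not found in {layer_bin} or PATH")
    return found

def markdown_to_docx(src: str, dest: str) -> None:
    """Convert a markdown file to .docx with a layer-provided pandoc."""
    pandoc = find_layer_binary("pandoc")
    subprocess.run([pandoc, src, "-o", dest], check=True)
```

The function’s own deployment package then stays tiny; only the layer carries the 100 MB binary.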
Tags: serverless lambda dependencies deployment packaging ops

The Tidelift Subscription (2018-05-08)
https://blog.tidelift.com/announcing-the-tidelift-subscription
The core idea of the Tidelift Subscription is to pay for “promises about the future” of your software components.
When you incorporate an open source library into your application, you need to know not just that you can use it as-is today, but that it will be kept secure, properly licensed, and well maintained in the future. The Tidelift Subscription creates a direct financial incentive for the individual maintainers of the software stacks you use to follow through on those commitments, aligning the interests of professional development teams and maintainers alike.
Critically, the Tidelift Subscriptions for React, Angular, and Vue.js cover not just the core libraries, but the vast set of dependencies and libraries typically used in these stacks. For example, a basic React web application pulls in over 1,000 distinct npm packages as dependencies. The Tidelift Subscription covers that full depth of packages which originate from all parts of the open source community, beyond the handful of core packages published by the React engineering team itself.
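That “over 1,000 distinct npm packages” figure is a transitive closure: everything reachable from the app’s direct dependencies. A toy closure counter over a dependency graph — the graph here is an illustrative miniature, not real npm metadata:

```python
from collections import deque

def transitive_dependencies(graph: dict, root: str) -> set:
    """Return every distinct package reachable from root, excluding root.

    graph maps a package to the packages it depends on directly; BFS
    collects the full transitive set — the "full depth of packages" a
    subscription would have to cover.
    """
    seen = set()
    queue = deque(graph.get(root, []))
    while queue:
        pkg = queue.popleft()
        if pkg in seen:
            continue
        seen.add(pkg)
        queue.extend(graph.get(pkg, []))
    return seen

# Hypothetical miniature of a React app's dependency graph.
graph = {
    "my-app": ["react", "react-dom"],
    "react": ["loose-envify", "object-assign"],
    "react-dom": ["react", "loose-envify", "scheduler"],
    "scheduler": ["loose-envify", "object-assign"],
    "loose-envify": ["js-tokens"],
}
```

Run on a real lockfile, this is the set whose size routinely surprises teams: two direct dependencies here already fan out to six distinct packages.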
Tags: tidelift open-source libraries dependencies coding

Developer Experience Lessons Operating a Serverless-like Platform at Netflix (2017-07-17)
https://medium.com/netflix-techblog/developer-experience-lessons-operating-a-serverless-like-platform-at-netflix-a8bbd5b899a0
Tags: serverless dependencies packaging deployment versioning devex netflix developer-experience dev testing staging scripting

Towards true continuous integration – Netflix TechBlog – Medium (2017-05-02)
https://medium.com/netflix-techblog/towards-true-continuous-integration-distributed-repositories-and-dependencies-2a2e3108c051
Using the monorepo as our requirements specification, we began exploring alternative approaches to achieving the same benefits. What are the core problems that a monorepo approach strives to solve? Can we develop a solution that works within the confines of a traditional binary integration world, where code is shared? Our approach, while still experimental, can be distilled into three key features:
* Publisher feedback — provide the owner of shared code fast feedback as to which of their consumers they just broke, both direct and transitive. Also, allow teams to block releases based on downstream breakages. Currently, our engineering culture puts sole responsibility on consumers to resolve these issues. By giving library owners feedback on the impact they have on the rest of Netflix, we expect them to take on additional responsibility.
* Managed source — provide consumers with a means to safely increment library versions automatically as new versions are released. Since we are already testing each new library release against all downstreams, why not bump consumer versions and accelerate version adoption, safely?
* Distributed refactoring — provide owners of shared code a means to quickly find and globally refactor consumers of their API. We have started by issuing pull requests en masse to all Git repositories containing a consumer of a particular Java API. We’ve run some early experiments and expect to invest more in this area going forward.
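The publisher-feedback feature — telling a library owner which consumers, direct and transitive, a release would break — reduces to a reverse reachability query over the dependency graph. A hedged sketch of that query; the component names are invented, not Netflix’s:

```python
def broken_consumers(graph: dict, library: str) -> set:
    """Find every consumer, direct or transitive, of a shared library.

    graph maps each component to what it depends on; we invert it and
    walk upward from the changed library. The result is the set of
    downstreams a publisher-feedback system would test before release.
    """
    reverse = {}
    for consumer, deps in graph.items():
        for dep in deps:
            reverse.setdefault(dep, set()).add(consumer)
    affected, stack = set(), [library]
    while stack:
        node = stack.pop()
        for consumer in reverse.get(node, ()):
            if consumer not in affected:
                affected.add(consumer)
                stack.append(consumer)
    return affected

# Hypothetical slice of an internal dependency graph.
graph = {
    "api-gateway": ["platform-commons"],
    "billing": ["platform-commons", "metrics-client"],
    "recommendations": ["billing"],
    "metrics-client": [],
    "platform-commons": [],
}
```

The hard part in practice is not this walk but keeping the graph accurate across hundreds of repositories and running all the downstream builds quickly.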
What I find interesting is that Amazon dealt effectively with the first two many years ago, in the form of their "Brazil" build system, and Google do the latter (with Refaster?). It would be amazing to see such a system released into an open source form, but maybe it's just too heavyweight for anyone other than a giant software company on the scale of a Google, Netflix or Amazon.

Tags: brazil amazon build microservices dependencies coding monorepo netflix google refaster

Camille Fournier's excellent rant on microservices (2016-07-06)
https://medium.com/@skamille/i-do-not-want-to-pick-on-the-author-of-the-original-piece-but-i-see-this-reasoning-as-support-for-d3223c45b67d#.l8i5kweqx
I haven’t even gotten into the fact that your microservices are an inter-dependent environment, as much as you may wish otherwise, and one service acting up can cause operational problems for the whole team. Maybe if you have Netflix-scale operational hardening that’s not a problem. Do you? Really? Is that the best place to spend your focus and money right now, all so teams can throw shit against the wall to see if it sticks?
Don’t sell people fantasies. This is not the reality for a mid-sized tech team working in microservices. There are enough valuable components to building out such a system without the fantastical claims of self-organizing teams who build cool hack projects in 2 week sprints that change the business. Microservices don’t make organizational problems disappear due to self-organization. They allow for some additional degrees of team and process independence and force very explicit decoupling; in exchange, there is overall system complexity and coordination overhead. I personally think that’s enough value, especially when you are coming from a monolith that is failing to scale, but this model is not a panacea.
Tags: microservices rants camille-fournier architecture decoupling dependencies

conventional-changelog-atom 502 Bad Gateway · Issue #13284 · npm/npm (2016-07-06)
https://github.com/npm/npm/issues/13284
Tags: npm fail javascript dependencies coding

JitPack (2016-04-04)
https://jitpack.io/
Tags: build github java maven gradle dependencies packaging libraries

Javascript libraries and tools should bundle their code (2016-03-23)
https://medium.com/@Rich_Harris/how-to-not-break-the-internet-with-this-one-weird-trick-e3e2d57fee28#.l4isp4cxe
Tags: packaging omnibus npm webpack rollup dependencies coding javascript

How Facebook avoids failures (2015-11-02)
http://queue.acm.org/detail.cfm?ref=rss&id=2839461
A "move-fast" mentality does not have to be at odds with reliability. To make these philosophies compatible, Facebook's infrastructure provides safety valves.
This is full of interesting techniques.
* Rapidly deployed configuration changes: Make everybody use a common configuration system; Statically validate configuration changes; Run a canary; Hold on to good configurations; Make it easy to revert.
* Hard dependencies on core services: Cache data from core services. Provide hardened APIs. Run fire drills.
* Increased latency and resource exhaustion: Controlled Delay (based on the anti-bufferbloat CoDel algorithm -- this is really cool); Adaptive LIFO (last-in, first-out) for queue busting; Concurrency Control (essentially a form of circuit breaker).
* Tools that Help Diagnose Failures: High-Density Dashboards with Cubism (horizon charts); What just changed?
* Learning from Failure: the DERP (!) methodology.

Tags: ben-maurer facebook reliability algorithms codel circuit-breakers derp failure ops cubism horizon-charts charts dependencies soa microservices uptime deployment configuration change-management

Preventing Dependency Chain Attacks in Maven (2015-08-14)
http://gary-rowe.com/agilestack/2013/07/03/preventing-dependency-chain-attacks-in-maven/
Tags: security whitelisting dependencies coding jar maven java jvm

Advantages of Monolithic Version Control (2015-08-10)
http://danluu.com/monorepo/
Tags: monorepo git mercurial versioning source-control coding dependencies

On Ruby (2015-04-03)
http://hawkins.io/2015/03/on-ruby/
I call out the Honeybadger gem specifically because it was the most recent time I'd been bitten by a seemingly good thing promoted in the community: monkey patching third-party code. Now I don't fault Honeybadger for making their product this way. It provides their customers with direct business value: "just require 'honeybadger' and you're done!" I don't agree with this sort of practice. [....]
I distrust everything [in Ruby] but a small set of libraries I've personally vetted or that are authored by people I respect. Why is this important? Without a certain level of scrutiny you will introduce odd and hard-to-reproduce bugs. This is especially important because Ruby offers you absolutely zero guarantees about the state your program is in when a given method is dispatched. Constants are not constants. Methods can be redefined at run time. Someone could have written a time-sensitive monkey patch that randomly undefines methods from anything in ObjectSpace, simply because they can. That example is so horribly bad that no one should ever do it, but the programming language allows it. Much worse, such code can be arbitrarily injected by some transitive dependency (do you even know what yours are?).
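Ruby is not alone here: Python permits the same runtime redefinition. The "personally vetted" stance can be partially automated by snapshotting the implementations you rely on at startup and checking them later. This snapshot idea is my own illustration, sketched in Python with an invented `HttpClient` class standing in for a vetted library:

```python
def snapshot_api(cls, names):
    """Record the code objects of vetted methods at startup."""
    return {n: getattr(cls, n).__code__ for n in names}

def verify_api(cls, snapshot):
    """Return the names whose implementations changed since the snapshot."""
    return [n for n, code in snapshot.items()
            if getattr(cls, n).__code__ is not code]

class HttpClient:
    def get(self, url):
        return ("GET", url)

# Vet the library once, at import time...
vetted = snapshot_api(HttpClient, ["get"])

# ...later, some transitive dependency monkey patches it.
def patched_get(self, url):
    return ("GET", url + "?tracking=1")

HttpClient.get = patched_get
```

After the patch, `verify_api(HttpClient, vetted)` reports `get` as changed, turning a silent redefinition into something a startup check can flag.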
Tags: ruby monkey-patching coding reliability bugs dependencies libraries honeybadger sinatra

Gradle Team Perspective on Bazel (2015-03-26)
https://gradle.org/gradle-team-perspective-on-bazel/
Tags: gradle bazel build dependencies compilation coding java

Travis Brown on Twitter: ".@stuhood walks us through the tiny print of his "most controversial slide". #SFScala" (2015-02-18)
https://twitter.com/travisbrown/status/567887549672263680/photo/1
Tags: monorepo git repository dependencies libraries coding

How to take over the computer of any JVM developer (2014-07-29)
http://blog.ontoillogical.com/blog/2014/07/28/how-to-take-over-any-java-developer/
To prove how easy [MITM attacking Maven Central JARs] is to do, I wrote dilettante, a man-in-the-middle proxy that intercepts JARs from Maven Central and injects malicious code into them. Proxying HTTP traffic through dilettante will backdoor any JARs downloaded from Maven Central. The backdoored versions retain their functionality, but display a nice message to the user when they use the library.
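One standard defence, beyond fetching over TLS, is verifying every downloaded artifact against a checksum pinned from a trusted, separate channel: a proxy can rewrite the JAR bytes in transit, but it cannot make tampered bytes hash to the pinned digest. A minimal sketch; the file contents in the usage are of course stand-ins for a real JAR:

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Check a downloaded artifact against a checksum pinned out-of-band.

    Reads in chunks so large JARs don't need to fit in memory.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

A build would refuse to put any artifact on the classpath unless this returns True; lockfile-style pinning in modern dependency managers is this check made systematic.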
Tags: jars dependencies java build clojure security mitm http proxies backdoors scala maven gradle

Luigi (2014-07-15)
https://speakerdeck.com/rantav/luigi
Tags: workflow orchestration scheduling cron spotify open-source luigi redshift pig hive hadoop emr jobs make dependencies

Database Migrations Done Right (2014-05-08)
http://www.brunton-spall.co.uk/post/2014/05/06/database-migrations-done-right/
The rule is simple. You should never tie database migrations to application deploys or vice versa. By minimising dependencies you enable faster, easier and cleaner deployments.
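In practice that rule is usually honoured with an expand/contract-style pattern: the migration and the code each tolerate the other being deployed first, so neither deploy depends on the other. A sketch of the application side using stdlib sqlite3; the `users` table and the `name` → `display_name` rename are invented for illustration:

```python
import sqlite3

def get_display_name(conn: sqlite3.Connection, user_id: int) -> str:
    """Read a user's name, tolerating both the old and the new schema.

    The migration renaming `name` to `display_name` can run before or
    after this code deploys; neither ordering breaks the other, which
    is exactly the decoupling the rule above asks for.
    """
    cols = {row[1] for row in conn.execute("PRAGMA table_info(users)")}
    column = "display_name" if "display_name" in cols else "name"
    row = conn.execute(
        f"SELECT {column} FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return row[0]
```

Once both the migration and the tolerant code have shipped, a later release deletes the fallback branch (the "contract" step).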
A solid description of why this is a good idea, from an ex-Guardian dev.

Tags: migrations database sql mysql postgres deployment ops dependencies loose-coupling

Dan Kaminsky on Heartbleed (2014-04-16)
http://dankaminsky.com/2014/04/10/heartbleed/
When I said that we expected better of OpenSSL, it’s not merely that there’s some sense that security-driven code should be of higher quality. (OpenSSL is legendary for being considered a mess, internally.) It’s that the number of systems that depend on it, and then expose that dependency to the outside world, is considerable. This is security’s largest contributed dependency, but it’s not necessarily the software ecosystem’s largest dependency. Many, maybe even more, systems depend on web servers like Apache, nginx, and IIS. We fear vulnerabilities significantly more in libz than libbz2 than libxz, because more servers will decompress untrusted gzip over bzip2 over xz. Vulnerabilities are not always in obvious places – people underestimate just how exposed things like libxml and libcurl and libjpeg are. And as HD Moore showed me some time ago, the embedded space is its own universe of pain, with 90’s bugs covering entire countries.
If we accept that a software dependency becomes Critical Infrastructure at some level of economic dependency, the game becomes identifying those dependencies, and delivering direct technical and even financial support. What are the one million most important lines of code that are reachable by attackers, and least covered by defenders? (The browsers, for example, are very reachable by attackers but actually defended pretty zealously – FFMPEG public is not FFMPEG in Chrome.)
Note that not all code, even in the same project, is equally exposed. It’s tempting to say it’s a needle in a haystack. But I promise you this: if anybody patches Linux/net/ipv4/tcp_input.c (which handles inbound network traffic for Linux), a hundred alerts are fired, and many of them are not to individuals anyone would call friendly. One guy, one night, patched OpenSSL. Not enough defenders noticed, and it took Neel Mehta to do something.
Tags: development openssl heartbleed ssl security dan-kaminsky infrastructure libraries open-source dependencies