Wednesday, December 30, 2015
CloudFormation scripting is terrible because it's based entirely on editing JSON. Ugh. Human UNfriendly.
I'd tried several editors: Amazon's CloudFormation editor, online editors, and plain ol' vi.
Props to this guy for extending JSON editing for my friendly go-to editor, vim:
https://github.com/elzr/vim-json
Wednesday, August 12, 2015
SSH, Yes!
SSH tunneling to the rescue! Yes!
Circumventing some local firewalls with port forwarding :)
ssh -N -L localhost:5432:box-i-cant-access:5432 user@box-i-can-access
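Then point the local client at the near end of the tunnel. For example, if the forwarded service were Postgres (a hypothetical here, given port 5432):
psql -h localhost -p 5432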
Thursday, July 16, 2015
IE9 versus PCI compliance
Nerd speak: Starting July 2016, HTTPS communication of confidential information must be negotiated as TLSv1.1+ for PCI DSS v3.1 compliance.
Layman speak: Your Grandma's computer won't be able to buy stuff from Etsy. (This effectively means IE9 -- the web browser released in 2011 and the newest Internet Explorer available for Windows Vista -- will no longer be supported for credit card payments.)
Sincerely,
Your Paranoid Computer Nerds
Migrating_from_SSL_Early_TLS_Information Supplement
Thursday, July 9, 2015
HTTPS: What Nobody Told Us
I burned one too many hours troubleshooting an HTTPS issue and decided to share lessons learned: both programmatic no-no's and TLS details that nobody told us.
The Audit
This story started when some auditor got in a frenzy that TLSv1.0 was allowed for public HTTPS communication with a customer's web application. That auditor demanded that only TLSv1.1 or v1.2 be allowed, despite version 1.0's problem being isolated to weak CBC mode (aka. the infamous BEAST attack) and RC4 ciphers -- ciphers that we already weren't allowing.
Side rant: who has time for audits with little security justification? I've no patience for security "gurus" who cannot run a simple scan to verify their worst nightmare or best dream:
justin:tmp jpittman$ nmap --script ssl-enum-ciphers $HOSTNAME
Starting Nmap 6.40-2 ( http://nmap.org ) at 2015-07-08 11:12 CDT
...
PORT STATE SERVICE
80/tcp open http
443/tcp open https
| ssl-enum-ciphers:
| SSLv3: No supported ciphers found
| TLSv1.0:
| ciphers:
...
OK fine, assuming the auditor's request is legit, I disable TLSv1.0 at load balancers terminating HTTPS ... and the webapp breaks.
For full disclosure, I should clarify secure communication in this particular design. This was a typical, 3-tier application architecture: a layer of load balancers proxied requests to the front-end tiers -- on behalf of web servers and app servers -- and also proxied intra-network communication between application servers hosting different apps. This design simplified HTTPS termination, because only the load balancers acting as proxies needed their secure certificates managed, but it complicated the idea of one app "server" making a client call to another app server within the same network. Also, these deployed webapps were all Java based -- but the programming language really only matters for implementation details.
The Error
When I turned off TLSv1.0 on all the load balancers, one of the deployed Java apps acting as a web client started throwing errors about SSL handshaking, like this:
IOException when getting the response content input stream javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
Here's the first part of the story that nobody told us: misnomers. This error makes it sound like an SSL protocol is failing when, as I'll show later, it is a TLS protocol failure. And this misnomer isn't particular to Java. I checked Python, Ruby, and PHP only to find that they too negotiate TLS protocols via inaccurately named "SSL" libraries or methods. Sure, you can make a defense for using the term "SSL/TLS" in documentation but what a way to conjure up a red herring. I wasted cycles thinking SSL protocols were still in play when they never were! Shame on you!
A Secure Socket
By digging into Java code samples, I found some ideal tests for uncovering the root cause of this ill-named handshake error:
SSLContext context = SSLContext.getInstance("TLSv1.2");
Now I'm no Java programmer but this bit of code is fairly simple: the SSL/TLS context -- in this case for a client call -- is obtained with a parameter that pins the protocol to TLSv1.2. That client context creates a socket to some server -- hence the acronym Secure Socket Layer (SSL) in the original, now inaptly named class SSLContext. When I tested this code with the parameter TLSv1.2, the proxied connection to the problematic server worked, but when set to TLSv1.0 the connection failed with the above SSL handshake error. Aha!
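For the curious, here's a minimal, self-contained sketch of such a probe -- my own test code, not the webapp's -- assuming a reachable HTTPS server on port 443:
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;

public class TlsProbe {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "example.com"; // hypothetical target
        // pin the protocol; swap in "TLSv1" to reproduce the handshake failure
        SSLContext context = SSLContext.getInstance("TLSv1.2");
        context.init(null, null, null); // default key managers, trust managers, randomness
        try (SSLSocket socket = (SSLSocket) context.getSocketFactory().createSocket(host, 443)) {
            socket.startHandshake(); // throws SSLHandshakeException if negotiation fails
            System.out.println("Negotiated: " + socket.getSession().getProtocol());
        }
    }
}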
TL;DR
Some good ol' TL;DR documentation verified default Java behavior that would explain the errors too. This case used Oracle's Hotspot JVM and luckily that vendor's documentation is usually verbose, if also cryptic. I read Oracle's rather lengthy reference guide to Java Secure Socket Extension (JSSE), which covers both the SSLContext and HttpsURLConnection classes. First off, picking the correct version of the documentation avoided some false fixes: Java 8 fixes didn't apply to this Java 7 case. Next, the mode of the JVM -- client versus server -- alters its default behavior; Oracle says JVM clients enable a different set of protocols and versions than those in server mode. Also, the documented samples set SSLContext to "TLS" inline -- which I would assume could mean any version of TLS -- yet the documentation clearly says that "TLS" means "TLSv1.0", excluding v1.1 and v1.2. A lazy programmer who borrowed those code samples without reading the documentation would have effectively hardcoded the client to TLSv1.0. Finally, the SSL handshake error would occur if the web client used an SSLContext method that set the protocol as a buildtime configuration.
So hardcoded bug or default behavior?
A Web Client
A similar bit of Java code shed more light on how common this web client problem is:
url = new URL(https_url);
HttpsURLConnection con = (HttpsURLConnection)url.openConnection();
In this code, an HTTPS connection is created to a URL of some server -- aka. a classic web client. Yet this HttpsURLConnection method doesn't specify the protocol or version like the previous snippet of code. I had read that HttpsURLConnection honors the JVM runtime option https.protocols to change protocols and versions. Here is where runtime versus buildtime components revealed themselves as part of the problem: by testing two different means of setting up a secure web client -- socket versus URL -- it became dramatically clear that I was probably dealing with a classic, hardcoded bug.
I reconfigured the runtime of this HTTPS connection to use TLSv1.1 or v1.2 and my Java test client worked!
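As a sketch of that runtime fix (assuming nothing downstream pins the protocol itself), the documented https.protocols property can be set at launch with -Dhttps.protocols=TLSv1.1,TLSv1.2 or early in code:
// set before the first HTTPS connection is created
System.setProperty("https.protocols", "TLSv1.1,TLSv1.2");
URL url = new URL(https_url);
HttpsURLConnection con = (HttpsURLConnection)url.openConnection();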
Hardcoded / Buildtime Configuration
A developer confirmed the root cause was indeed a hardcoded setting of the protocol and version when he changed the problematic webapp client's SSLContext parameter to "TLSv1.2", rebuilt, and redeployed. Although the fix was simple, I had burned too many hours troubleshooting -- what was essentially -- a hardcoded / buildtime problem that could not be trivially fixed with runtime changes. I had followed some red herrings while searching for the root cause of this seemingly simple SSL/TLS change, including:
- Server/Client Certificates
- Certificate Authority chains
- Firewalls
So nerds beware. You may need to go down the rabbit hole far beyond the typical diagrams with labels like "SSL Handshake".
References
https://tersesystems.com/2014/01/13/fixing-the-most-dangerous-code-in-the-world/
http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html
https://blogs.oracle.com/java-platform-group/entry/diagnosing_tls_ssl_and_https
http://www.oracle.com/technetwork/java/javase/documentation/cve-2014-3566-2342133.html
http://docs.oracle.com/javase/7/docs/technotes/tools/solaris/keytool.html
Tuesday, June 2, 2015
Security: Unexpected Terminal Event
Today I killed a production website by pointing a webapp vulnerability scanner at it. The unexpected stress brought the appserver to its knees while revealing some holes. Luckily my more tactical colleagues came to the rescue and had the website back up in minutes -- after I had killed the scan -- but this opened two cans of worms.
First, an obvious need for the design and implementation of 1) stress/load testing and 2) vulnerability/penetration testing in regular operations. These 2 needs are usually met ad hoc; both should be part of the certification process for release.
On another note, I was using the awesome open source tool called w3af. :)
Security: HTTPS Utilities
I was going to talk about recent hacks against "secure" web communication, aka. HTTPS (Heartbleed, POODLE, BEAST, etc.), but that is a bloated topic. Instead, I'm just going to demo 3 invaluable TLS/SSL utilities for techies, and show how Amazon makes managing HTTPS so simple that technologists have even more reason to be lazy.
3 HTTPS utilities
The 3 utilities: nmap, openssl, and curl. That's really it -- they have been around for years so there's nothing new here. A few examples will demonstrate their utility in the verification, identification, and negotiation of HTTPS communication.
Identify all secure communication options serviced by a website:
$ nmap --script ssl-enum-ciphers www.httpvshttps.com
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
443/tcp open https
| ssl-enum-ciphers:
| SSLv3: No supported ciphers found
| TLSv1.0:
| ciphers:
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA - strong
...
Verify the secure communication options negotiable by a client -- here, the highest TLS version with RSA key exchange and authentication, and strong AES encryption:
$ openssl ciphers -v | grep 'TLSv1.2' | grep 'Kx=RSA' | grep 'Au=RSA' | grep 'Enc=AES(256)'
AES256-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA256
Negotiate a pre-defined secure communication to simulate a web client:
$ curl -v --location --tlsv1 --ciphers AES256-SHA https://www.httpvshttps.com
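When curl's TLS backend gets in the way, openssl can negotiate the same handshake directly. A hedged example (s_client flags as in modern openssl builds; assuming the site listens on 443):
$ openssl s_client -connect www.httpvshttps.com:443 -tls1_2 -cipher AES256-SHA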
Amazon HTTPS & ELBs
AWS ELBs (Elastic Load Balancers) make managing HTTPS simple. These load balancers can be set up with user-defined certificates to terminate secure communication to a website, and are deployed with either pre-defined or user-defined security policies. When a customer asked me to disable TLS v1.0, I simply changed their ELB security policy, removing via a checkbox all the ciphers available for that kind of secure negotiation:
Of course, no good technologist trusts a GUI so I verified that change by using the 3 utilities above :)
Wednesday, May 27, 2015
Security: Hactivism
Imperva released a whitepaper detailing an "activist" hack that mimics many of the means I'd outlined in dissecting the Sony attack last year. Not a coincidence.
Anatomy of a Hacktivist Attack
Wednesday, May 13, 2015
Internet: Authoritarian Machine
The "Internet" is, fundamentally, authoritarian but appears unfiltered and inclusive on the surface. I don't say that with any malice. Your Internet service provider, for example, wants your money so long as you aren't a hot potatoe.
What I mean by authoritarian is how the "Internet", a rather loaded term, can be heavy handed in squelching participation that inhibits its normal operation or societal rules. A remarkable example of this is the infamous Dark Net, and a more poignant example is China (which I'll come back to). Hosting providers are "blind" enablers of a myriad of online activities, including those considered amoral ... Avenue Q's character Trekkie Monster humors us into realizing this ..., but sometimes ISPs can't be bothered to enforce societal rules until a participant gets themselves caught or disturbs normal operations. Once they've got a hot potato on their hands, any Internet provider is going to pass the buck. This explains all the liability crap that we sign off on when buying an Internet service. I don't mean to imply you or I or anyone is doing anything illegal with the Dark Net example; I'm saying that something as benign as free speech is actually a societal characteristic of the US. It's up to participants of the Internet to follow societal rules lest an authoritarian machine work against us.
The Internet has not created, despite popular belief, any (direct) relief to societal problems and is, at least technologically speaking, highly conformist. The technologies that form the Internet, so to say, are a myriad of protocols and standards created (or "authorized for use") by various quasi-government committees (IETF, IANA, ICANN, W3C, ISO, etc.), and those committee members seek conformity, whether for good or bad. Even technologists use telling terms that allude to the Internet's style of governance, like "Certificate Authority" and "authoritative name server" (despite preying on people's idealizations with mumbo jumbo like "Web of trust"). That conformist characteristic is fundamentally overlooked by Internet free speech and civil rights advocates. Any variance from standardized or expected means of communication and connectivity is both technologically prohibited and authoritatively regulated via the Internet. As long as Internet participants do not undermine the operation of those technologies (e.g. sending a deluge of spam or flooding a website with noise) then you're OK; otherwise folks are labelled a hot potato, aka. hacker, get their connection blocked, and maybe their "private" information sent off to law enforcement.
Mainland China stands out as a poignant example of what I mean. It's not like the "Chinese" Internet is different than our own. It's the same technologies (HTML, HTTP, TCP, IP, OSPF, etc.) applied in an overtly authoritarian way to enforce a different set of societal rules than in the US. If we were Chinese, those Internet technologies would enable you to create an inflammatory website with Mandarin characters, encoded in a standard format (UTF-8), but your local Communist mayor would shut down your site and block your access to facebook.com. That's a characteristic that is in opposition to the folksy (mis)perception, especially in Western countries, of the Internet being inclusive and unfiltered. And, of course, there's always the NSA in the U.S.A.
In some ways the Internet reminds me of the Matrix: when there's a bad apple, like Neo was, it just throws the apple away. That ability to squelch free speech and infringe civil rights (if those exist in a society) are merely authoritarian actions enabled by the Machine that is the "Internet". Thankfully, the above mentioned quasi-government committees that affect Internet technologies have usually taken a hands-off approach to societal issues, so we have a ton of free speech happening on top of Internet service provided in the US. Town hall debates still happen but now they are flame wars in forums -- hopefully encoded in UTF-8 but probably in ISO-8859-1 since we're in the West and largely forget the rest of the world :)
Labels:
misperceptions,
society,
standardization,
standards,
theory,
trends,
web
Tuesday, May 12, 2015
Security: Hacking Wordpress/PHP, Ruby Comparisons, and Lessons Learned
Another US-CERT alert went out for Wordpress last week. I'm not shocked: I call Wordpress one of the biggest hacker honeypots around because it keeps popping up in cybersecurity news. Yet it's not fair to stay subjective, so to be more objective I'll take a look at verified software vulnerabilities to assess what's happening under the hood.
Part 1: Lots of Vulnerabilities
I went back to MITRE's CVE database again (NIST's mirror isn't as user friendly) to compare how many vulnerabilities have been occurring in Wordpress and PHP -- the core language behind Wordpress -- versus another popular web development language, Ruby (and Ruby on Rails). See my previous post on CVEs and preventing software vulnerabilities for using these databases. Anyways, a simplistic comparison of all vulnerabilities found in Ruby versus PHP is staggering:
CVE Total Counts
$ curl -s https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=ruby | grep "CVE" | wc -l
288
$ curl -s https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=php | grep "CVE" | wc -l
5812
For the studious: since Ruby doesn't really have a good equivalent to Wordpress, it's fairer to note that Ruby on Rails CVE counts are also not on the same scale as PHP's. Here are the CVE dumps for those too:
$ curl -s https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=rails | grep CVE | wc -l
122
$ curl -s https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=wordpress | grep CVE | wc -l
951
It should be noted -- especially for the skeptical -- that PHP and Ruby have some relevant differences. PHP 1.0 was released in June 1995 and Ruby 1.0 in December 1996, so PHP is only slightly older than Ruby. The CVE dumps above cover all known time, so I don't think age explains the difference in quantity (the CVE databases only go back to the early 2000s anyways). I chose to compare these two languages instead of solutions built on top of them, like Wordpress, because they are both popular, both interpreted, and both end up on the front line of cybersecurity -- running public websites.
Adoption and Presence
A coworker sent me over to BuiltWith for an unrelated comparison of website software, so I thought it would be a good tool to dig deeper on whether software adoption is related to quantities of vulnerabilities. BuiltWith is a service that scrapes online footprints to determine which software runs public websites -- everything from operating systems to content management systems like Wordpress. It's pretty nifty. I asked BuiltWith what the adoption rates look like for PHP and Ruby on Rails (there was no pure Ruby data, which makes sense for BuiltWith's purpose):
Framework Usage
Essentially, PHP pops up in websites 30 times more often than Ruby, but PHP usage is essentially stable while Ruby's is on the rise; so it's worth another look at whether the frequency of vulnerabilities is related to usage or adoption. Here are vulnerabilities found over time and basic trends in each language:
CVE Counts per Year
Language | 2012 | 2013 | 2014 | 2015 Forecast | Linear Regress |
---|---|---|---|---|---|
PHP | 171 | 127 | 150 | 128.3 | -10.5 |
Ruby | 33 | 75 | 27 | 39 | -3 |
* note: Google Slides' FORECAST uses a slightly different model than SLOPE, so calling that out.
Contrast the vulnerabilities trend in Ruby -- a decreasing rate of occurrence -- with the adoption rate of Ruby according to BuiltWith -- roughly 33% annual growth. Of course, there are many reasons for finding vulnerabilities: large adoption may mean more hackers want to target a larger victim population, or popularity could drive up the number of bugs in an effort to meet many (possibly insecure) feature requests. To play Devil's Advocate, PHP does indeed have a decreasing rate of vulnerabilities, but it isn't being adopted more than Ruby. So does a 20-fold difference in vulnerabilities between PHP and Ruby get explained away by their sheer online presence?
Part 2: Writing Bugs
Instances and rates of vulnerabilities don't capture the severity of each vulnerability, or the ease of a hacker exploiting the software, or the laziness of programmers. (CVSS does focus our attention, though.) Also, cybersecurity news only gives us superficial recommendations like "security patch available, update now!" instead of digging deeper into buggy software. At most, we may read that an "XSS" or "SQL injection" vulnerability was found -- as if those phrases alone should inform our decision to use one software package over another. I decided to dive deeper into one of these XSS & SQL injection Wordpress bugs, similar to the one behind the US-CERT alert issued last week, and found some disturbing practices in PHP programming and some ignored best practices.
One recent vulnerability in Wordpress came down to programmers being too lazy to scrub data input. Something as seemingly benign as a Wordpress forum was exploited by submitting HTML into the comment field; and because Javascript can be embedded into any valid HTML data stream, the PHP server parsed and rendered back to the web browser client whatever had been submitted as a comment. The marvel of this hack is that Wordpress comment moderators are often Wordpress administrators who have logged in with unlimited access to the site, so malicious Javascript embedded in a comment field on their web browsers would gain unauthorized, elevated privileges to execute against the entire website. The studious guy who found this vulnerability made a horrific demo of Javascript uploading content to the PHP server without the moderator's/administrator's knowledge. That demo leverages cross-site scripting (XSS) to upload the content but it could have injected malicious SQL into Wordpress.
I've respected the programmer's motto of being lazy and keeping things simple but after seeing this hack I wonder: have we become too lazy? Apparently the Wordpress comment functionality above wasn't unknown to developers. Wordpress designed the comment form this way as a feature -- Wordpress users wanted to "texturize" comments with options like italicized fonts, embedded hyperlinks, etc. so Wordpress developers enabled HTML parsing of comments by the PHP engine. They were actually doing a kind of data filtering of the comment field but did not thoroughly sanitize it!
In the first chapter of his concise work Essential PHP Security, Shiflett brings up thorough data scrubbing right at the start, and his rationale harkens back to the CVE reports I've cited before:
The vast majority of security vulnerabilities in popular PHP applications can be traced to a failure to filter input. (pg. 21)
A bit of Googling on problems with PHP programmers filtering data returned some disturbing practices. For example, PHP had come up with a global requirement for quoted data to be escaped to prevent SQL injections (the magic quotes feature), but Shiflett notes that in reality this caused complications that encouraged programmers to fall back to merely stripping data of quotes or slashes instead of checking for valid data. When this global quoting requirement troubled one PHP programmer, not a single suggestion included either checking for valid data or using best practices in filtering data. Shiflett suggests using standard data sanitation functions in PHP, including htmlentities() for interacting with front-end data (with its exhaustive ENT_QUOTES parameter), mysql_real_escape_string() for back-end data, etc. Why were these functions not included in answers for the PHP programmer having issues with quoted data?
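To make those function names concrete, here's a tiny, hypothetical PHP sketch of the pattern Shiflett recommends (variable names and the length limit are mine; the legacy mysql_* API shown was current at the time and assumes an open connection):
<?php
$comment = isset($_POST['comment']) ? $_POST['comment'] : '';
// check for valid data size first -- the step the Wordpress hack exploited
if (strlen($comment) > 2000) {
    die('Comment too long.');
}
// escape for front-end output so embedded HTML/Javascript renders inert
$safe_html = htmlentities($comment, ENT_QUOTES, 'UTF-8');
// escape for the back-end SQL query
$safe_sql = mysql_real_escape_string($comment);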
To go back to the above case of the Wordpress vulnerability, what constitutes "thorough" data sanitization? In the strictest sense, the Wordpress fix required not allowing any HTML input from the user that hadn't been predefined. To keep the feature intact, Wordpress developers had done data filtering that only allowed a subset of HTML as valid input, but that sanitation was undermined by forgetting another sanitization step: checking for valid data size. The hack demo'ed above, after all, leveraged a very lengthy comment field. So the vulnerability was incomplete, superficial data sanitation. If I extrapolate these mishaps to web application security for Ruby, then there isn't a magic pill: Ruby or Rails might have more programmatic means for data sanitation, but the same fundamental process applies. (I'll leave Ruby data sanitization for a follow-up blog.) Generally speaking then, I don't buy the excuse that vulnerabilities are merely a lack of forethought. It gets back to how we program.
Preventing Bugs
One of my coworkers boiled this entire blog down to "bad coding" but I think that is laziness itself speaking on our behalf. The takeaway I got from writing about this XSS and SQL injection bug is the latent, perennial problem of feature requests trumping best practices. Allowing texturized comments was valued more than thorough data sanitization. In the PHP / Wordpress examples above, data sanitation was needed for:
- String fields of maximum length
- String data input of only predefined HTML tags
More generally, the development lifecycle should include systematic means for exhaustively scrubbing data input and output.
Programmers have means for all kinds of testing, and can include a good kind in-flight with unit tests: for every input or output operation, we should iterate over data that abuses the interface to determine whether more data sanitation is needed. This leads to one characteristic where Ruby and PHP differ: Ruby includes a standard unit testing framework out of the box, whereas PHP programmers must choose and install one of many frameworks before they can start writing unit tests.
These lessons learned aren't news. Even good Ruby web applications will need to be written with the idea of systematically preventing bugs and weaknesses by exhaustively testing data sanitation. MITRE's common weakness database is blatantly sarcastic about its best practices being ignored by programmers, as attested by their reports (see CWE-79 and CWE-434). To be fair, MITRE's recommendations can be harsh -- like not trusting any input, even a PNG sourced by HTML -- so some of the lessons boil down to classic security debates about functionality. And to be honest, I haven't typically tested for unexpected data, have only done the minimum in leveraging programmatic data filtering, and have never included unit tests in my own programs. I make these recommendations knowing that being lazy isn't an excuse. What is left unknown is whether PHP programmers are lazier than Ruby programmers. :)
Labels:
best practices,
book,
dev,
developer,
development,
online,
php,
ruby,
security,
technical,
testing,
vulnerability hacking,
web
Wednesday, May 6, 2015
Security: The Backstory to Home Depot
The class action lawsuit filed against Home Depot reveals a telling backstory that made a hacker's dream come true, and it started with a new chief of IT security in 2011:
- hired an "enforcer" manager
- "bullying" and "abrasive" - descriptions of Jeff Mitchell, CISO (Chief Information Security Officer)
- high attrition
- around 50% loss of IT security employees after 3 months of Mitchell's promotion to CISO in 2011
- additional talent loss continuing to 2013
- perceived IT security as discretionary spending
- "We sell hammers." Matthew Carey, CIO
- Mr. Carey focused on technological improvements in the company’s supply chain
- "it’s going to interrupt the business”, Jeff Mitchell, CISO
- cost cutting in IT security
- loss of ability to hire top talent
- suspension of computer asset inventorying (Symantec Control Compliance Suite)
- suspension of regular risk analysis & reporting
- shelving IT security projects
- deferred POS encryption project
- suspended privileged computer account access auditing system (Cyber-Ark Software purchased but not implemented)
- ignored advanced intrusion detection firewall (Symantec NTP, Network Threat Protection, purchased but functionality never enabled)
- ignored IT security risks
- ignoring (and penalizing) a whistleblower in 2011
- both the legal and IT security departments took no action when an employee reported critical vulnerabilities in retail stores, except to dismiss the employee
- non-action on confidential POS vulnerability reports
- from FBI in early 2014
- from VISA in 2013
- non-action on remediation recommended by 3rd party IT auditors
- auditors / consultants flagged company software as outdated and unpatched
- FishNet Security consulted
- Symantec Corp consulted
- random & incomplete internal security audits
- computer systems "desperately out of date" - Frank Blake, CEO
- near EOL (End Of Life) softwares in production
- the POS machines targeted by hackers used Microsoft Windows XPe "XP Embedded", an OS that was 13 years old
- the version of anti-virus, Symantec EndPoint, was 7 years old at the time of the hacking
- routine software patching replaced with ad hoc updates
Thursday, April 16, 2015
Investing: Comparing Online Advisors
Comparing Online Investment Advisors
So much investment advice, so little time (and money). There are many personal investment advisors online now using analytics to give cookie-cutter advice. The upside to these online services is the promise of greater transparency and a consolidated "pane of glass" view of your accounts across multiple brokers or institutions. I think those promises are valuable for self-driven (or un-managed) investing, even if they can't replace a professional personal investor.
I wanted to compare these "online advisors" side by side. My comparison is based on (my) typical use of their interfaces and it's been an enlightening review. After each of them sucked in my investment accounts, it was visually obvious which ones were differentiators. I list each advisor's 6 month returns for my personal investments -- not for bragging rights -- but to show that large variances exist even in the "hard" numbers. This makes any of their numbers or advice suspect, so buyer beware.
From the chart below, it's pretty clear to me that I'll be keeping PersonalCapital around. Openfolio has a nifty crowdsourcing type of approach that will be fun to watch too.
Advisor | 6mon Return Sample | Advice Sample | Big Differentiators | Worst Detractors |
---|---|---|---|---|
SigFig | 9.7% | buy bonds, sell high expense ratio position | manage accounts (free up to $10k) | short performance analysis (1yr) |
FutureAdvisor | 9.4% | buy bonds, buy US stock | manage accounts (paid service) | short performance analysis (6mon), no mobile app |
NextCapital | 9.4% | not offered | long performance analysis (start of holdings), calculates α (paid service), calculates account and holding fees / taxes (paid service), discloses algorithm for calculating returns**, granular asset class differentiation*** | no personalized advice or management service, no mobile app |
Personal Capital | 11.5%* | buy US stocks, sell international stock | connects to P2P accounts*, historical performance extrapolation back to 1992, calculates account and holding fees / taxes, granular asset class differentiation***, multi-factor authentication | |
Openfolio | 9.8% | increase Sharpe ratio, sell high expense ratio positions | compare to other investor portfolios | short performance analysis (Jan '14), no mobile app |
* PersonalCapital can calculate performance including P2P lending returns
** https://www.nextcapital.com/pdfs/return_calculation_details_linked_daily_return.pdf
*** examples: PersonalCapital differentiates international bonds from US bonds, NextCapital & PersonalCapital differentiate industry type
Friday, March 27, 2015
Cloud: Round 2, Developing *in* the Cloud, for the Cloud
I went to a local Rails workshop -- wahoo! -- and the young, teenage-ish instructor got us students, including an old school vim user like myself, hooked up on Nitrous.io. This IDE looks more intuitive than Cloud9 (from my previous, "Round 1" blog) and gets lots of kudos from the programming trenches, although it seems to have fewer holistic options (aka. fewer points of integration than Cloud9). At least Nitrous.io has dabbled with touch interface programming whereas Cloud9 has ignored that use case.
So here goes Nitrous.io for future development.
In other news, tmux came up in discussion. Looks like the teenagers are liking the old screen functionality again, resurrecting its core features from the grave of the Graybeards.
Thursday, February 12, 2015
Cloud: The Security Debate
A pro-public, anti-private Cloud article came out -- this time on Techrepublic -- that encouraged another debate on the insecurity of essentially all public services for IT. I interrupted a security colleague to chat about a hypothetical and she offered a human perspective on this perennial debate -- the perspective of control.
Here's my simple hypothetical. Suppose I have some deliciously personal data about you. Say this personal data is your full name, full address, credit card numbers, social security number, insurance card, etc. So it's personal, should be private, and is obviously valuable data for fraudulent purchases of the kind that hackers love. Now let's assume I'm naive and push this bit of your personal data up to a public Cloud storage provider like Dropbox or Google Drive or OneDrive. Big glaring honeypot -- that's what it'd be -- so give me the benefit of the doubt and assume I encrypt the data first, then move it up to the public Cloud. And of course I'd encrypt with a very strong private key that's only on my smartphone because I'm an oddly paranoid person. (Yes, that's unrealistic, but let's go down this Rabbit Hole.)
What would a hacker do? It's trivial. A hacker would go after my smartphone, not the public storage provider that holds the valuable data. Personal smartphones are easier targets: I've seen many smartphone unlock codes, and some smartphones with no code but a simple swipe. Sure, the hacker would put in sizeable effort to steal my smartphone, but that effort is less than (failing at) brute-force decryption or hoping to find a bug in a crypto algorithm. This is a simplistic (and less realistic) hypothetical, but I'm reiterating a glaring misunderstanding in Cloud security debates: we forget the weakest link or, in formal IT security terms, the results of "risk assessments". There would be several weak links in my example -- like overlooking access to the public storage, transmission from the smartphone to the public data store, etc. -- but the weakest link is not data held in a public service that is strongly secured. Some paranoid IT professional will spurn my simplistic hypothetical by saying encryption algorithms have bugs and smartphones can be rooted to add extra layers of security, and sure, that's true, but which is more secure? And better yet, why?
Again, this is a simplistic dichotomy -- personal smartphone with private encryption key versus encrypted private data in public storage -- that I've reduced down to a data-at-rest example, but that doesn't change the fundamental concepts in securing technology. Most folks keep their smartphone under very close control, at least physically speaking. Just because you control a technology doesn't necessarily mean it's more secure. Our expectations of security in public software, for example, were challenged by the Open Source movement[1]. My colleague hinted at latent desires for IT folks to control technology, and I believe human nature creeps into our debates, especially when you hear folks trusting things that they control. I think the best security would come from distrusting all technology whether you control it or not, but that would be a boring, xenophobic world indeed. Sane consumers don't hoard money under the bed but entrust a Bank to secure it on their behalf. Good banking consumers simply double check the Bank's numbers against their own accounting, and don't deposit more than the FDIC insures :)
Once we admit to a desire for control, and to the sense of security we infer from that control, it becomes clear that the Cloud-versus-onprem security debates are pitting equal weaknesses against each other. Realistic risk assessments are needed for both technologies.
References
[1] Open Source vs. Closed Source Software: Towards Measuring Security. http://www.icsi.berkeley.edu/pubs/networking/opensource09.pdf
Labels:
article,
misperceptions,
news,
open source,
security
Tuesday, January 27, 2015
Cloud: Top 3 Concept Shifts
There's a lot of advice (and hype) about Cloud technologies, yet some recent articles around AWS reminded me how our expectations of traditional hosting have changed. A business used to simply transition from one hosting provider to another via a simple "fork lift" between datacenter providers. After working with AWS clients for just a year, I've found a few Cloud enablement specialists who echo the sentiment that a traditional datacenter move won't work nowadays because of fundamental progress in infrastructure services. I went to Amazon's HQ a few months ago for an architecting class where we stepped through migration details that revealed a kind of concept shift for any technology migrating to any Cloud provider.
Here are my top 3 concept shifts -- a kind of learning curve for technologists moving from traditional IT to Cloud:
- pet versus herd - the "pet versus herd" mentality is a crude term that reduces compute resources to a commodity yet exposes emotional baggage. Technologists traditionally treated their systems like pets, including the humorous personification of computers, whereas Cloud treats systems like herds, including the expected losses and gains of components. This has little to do with DR (Disaster Recovery) and everything to do with perspective. A less caricatured comparison is seeing how the traditional datacenter design goal was to provide always-on services by keeping the component instances always-up, while the Cloud design expects components to be in various states of utility but the services themselves are to never be interrupted. This design shift has been epitomized by Netflix's Chaos Monkey.
- Infrastructure as code - some have assigned the concepts of "infrastructure as code" to a new role called DevOps, but whatever you call this change, the fundamental shift is:
- a) from infrastructure responding to the application, to
- b) the application requesting infrastructure. (A minimal template sketch follows the service list below.)
- don't repeat yourself - a key differentiator with Cloud versus traditional datacenter hosting is using the services the Cloud provider offers. Some of these services are akin to SOA (Service Oriented Architecture), which has been around for awhile; instead I mean the IaaS (Infrastructure as a Service) concept that keeps popping up in Cloud discussions, as well as automation. Automation always reeks of job insecurity in IT, but it actually addresses long standing waste in operations where we think our infrastructure requires some unique solution when we're really just reinventing the wheel. The shift is similar to developers switching to "don't repeat yourself" frameworks. Consider how much a provider already offers as ready-made services:
- eMail and messaging
- virtual machines and images
- network storage and disk volumes
- virtual and private networks
- firewalls and routers
- databases and datasets
- authentication and access mechanisms
- logging and auditing
- load distribution
- web services
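As promised above, here's a minimal, hypothetical CloudFormation template as a taste of the application requesting its own infrastructure. The resource shape (AWS::EC2::Instance) is real; the AMI ID is a placeholder:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Sketch: one web server, declared instead of provisioned by hand",
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": "t2.micro",
        "ImageId": "ami-12345678"
      }
    }
  }
}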
Labels:
cloud,
configuration,
development,
devops,
iaas,
operations,
trends