HIPAA or Giraffe?

            When we (in the INFOSEC community) think of HIPAA, we usually think of the security implications and requirements. That’s our perspective, and what’s important to us as practitioners. The law, on the other hand, has very little to do with security-- most of the security-related content is wedged into the law’s Section 264, which basically tasks the Secretary of the US Department of Health and Human Services with figuring out what protections should be put on individual patients’ medical information. When the law is copied from the Web into MS Word, Section 264 comes to about a page of text, while the entire law runs 178 pages.

You can find it here:

https://www.govinfo.gov/content/pkg/PLAW-104publ191/html/PLAW-104publ191.htm

 

            The weird thing, from where I sit, is that this law, which is purported to enhance the security of patient data, does pretty much the opposite. The law encourages (just short of a mandate) putting all American medical data into an electronic format, according to a template that the law also tasks the federal government with creating. My question: what is more secure-- paper records or electronic records?

 

            - Paper records can be stolen, modified, or destroyed, assuming an attacker can get physical access to them. Major or minor disasters, such as fire and flood, could likewise destroy or damage physical records. However, copying these records, or modifying them in a quasi-undetectable way, is a cumbersome, time-consuming process: the attacker would have to capture the data with a device (a camera or photocopier), usually page by page, and typically with a light source present. Even stealing paper records is somewhat difficult: paper files are fairly heavy and quite unwieldy...stealing the records of, say, 1,000 patients (if each record is 100 pages long, which is actually a fairly small patient record) would be nearly impossible for a single attacker without a tool like a handcart or forklift, and several trips between where the records are stored and wherever the attacker wants to take them (say, a vehicle).

 

            - Electronic records are easy to steal in bulk: a file or a thousand files or a million files can be moved, erased, or copied without much difference in effort (granted, there may be a considerable difference in the time required to copy a million files versus a single file, but compared to the time it would take to copy a million hardcopy files, this duration is negligible). Modifying a single file, or a hundred files, or a thousand, through the use of an automated script, in an otherwise-undetectable manner, would be much easier than trying to physically change a paper record. And electronic theft/destruction/modification can be done remotely: the attacker never needs physical access to the data in order to harm it. Electronic media (drives, tapes, etc.) are, however, still susceptible to physical disasters like fire and flooding.

 

            With that said, an electronic record can be duplicated easily for archival purposes (the same quality that makes it easy to steal also makes it easy to back up, creating multiple copies that can be stored in different locations and thus survive a disaster). An electronic record can be readily encrypted and decrypted by its owner; this would be just about impossible to do with paper records in any practical way. And an electronic data store, and each individual file in it, can be subject to logging and monitoring in a way that is impossible for hardcopy: a piece of paper cannot tell its owner how many eyeballs have seen it.
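The fingerprinting and logging points are easy to demonstrate. Here is a toy Python sketch (stdlib only; the names and structure are my own invention, not anything HIPAA or its implementing rules prescribe) showing two properties a paper chart cannot offer: a cryptographic fingerprint that reveals any modification, and an access log that records every read.

```python
import hashlib
import datetime

def fingerprint(record_bytes: bytes) -> str:
    """A SHA-256 hash lets the owner detect any tampering with a record."""
    return hashlib.sha256(record_bytes).hexdigest()

access_log = []  # the record's "eyeball count" -- paper can't keep this itself

def read_record(records: dict, patient_id: str, reader: str) -> bytes:
    """Every read is logged before the data is handed over."""
    access_log.append({
        "patient": patient_id,
        "reader": reader,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return records[patient_id]

records = {"p001": b"allergies: penicillin"}
baseline = fingerprint(records["p001"])

data = read_record(records, "p001", "dr_smith")
print(fingerprint(data) == baseline)  # True: unmodified since baseline
print(len(access_log))                # 1: one read so far
```

In a real system the encryption half of the story would be handled by a vetted library (e.g. AES via a maintained crypto package) rather than anything hand-rolled; the point here is only that duplication, integrity checking, and per-read auditing cost electronic records essentially nothing.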

 

            I’m not really sure the answer to every security issue is “put it on a computer.” But I’m not a Luddite, either: I don’t think we should stick to archaic modes of data processing and communication just to avoid security issues.

            However, I think this law is a perfect example of how attempting to codify security through a given practice/measure can, instead, harm that very same goal. I don’t think there was much of a market for ransoming patient data before HIPAA, and I don’t think hospitals and doctors had much of an IT security budget before data was converted to electronic form (which, again, is not always the best policy: the 414s hacking crew demonstrated all the way back in the 1980s that medical equipment/services could be harmed remotely). But there are also unintended consequences of efforts such as the HIPAA legislation; one of these is that the cost of medical care in the United States continues to escalate, and the cost of complying with laws such as this makes it harder for new, innovative, small providers to enter the market and compete.

            So was this law useful for patients? Or did it harm them overall-- from both a security perspective and an access-to-healthcare perspective?

            I don’t have much data on this. I’m glad to hear whatever anyone else has to contribute, in the comments or in private messages.


2018 In Review

[Note: this piece was originally supposed to run in an industry journal, but the legal department killed it, even though the editors enjoyed it.]

            Making written predictions about anything is a fool’s errand; there are so many, many ways to be wrong. This is what happened to me when I last wrote for this esteemed publication, in 2014; the publishers were so put off by my wildly inaccurate prognostications that I’ve not been allowed back for three years. In fact, it’s only because of the recurrence of Mike Chapple’s painful campground-related scurvy that this piece appears here (and I am sure I am not alone in wishing Mike a speedy recovery, and that we are all eagerly awaiting what is sure to be his definitive take on “Kitten Posters -- Raising Security Awareness....and Brightening Your Day!”).

            Because of the ugliness and general scoffing that have been the mainstay of my email inbox for the past 36 months, I have decided not to make predictions: I am going to cheat. While this may cost me both my good standing in several professional organizations and my freedom (I will be violating several international agreements in this process), I think it’s worth my pride.

            I am making use of the tachyon-based communication system to send this message backward in time, from the year 2027, giving me a pretty good perspective on what is about to happen to your industry. This method of communication was itself developed in-- well, never mind....you’ll see, and I’ve broken enough laws already. So here it is: the big goings-on in the INFOSEC field, circa 2018:

            -- The public executions of the Equifax security staff went off without a hitch, and also carried the highest per-home viewership share since the final episode of M*A*S*H was broadcast. It was seen as a just outcome, not because the transgressors were incompetent (though that argument was definitely made), but because of their cavalier acts of last-minute profiteering before announcing the breach, which were so callous and calculating. Of course, the executions merely mollified the citizenry, who were only too glad to move on to the next news-cycle tidbit; they did nothing to modify behavior among security practitioners, had no substantial effect on the legal system, and didn’t even change the hiring practices of organizations looking for security personnel. And Equifax, as you’ll soon see, was able to Arthur Accenture itself into a new incarnation and suffer absolutely no ill effects to its market share or profitability.

            I mean-- come ON: we know that security and IT people are, by far, the worst violators and insider threats, both in terms of frequency and scale...and nothing ever changes. Mainly because everyone wants to pretend otherwise. That doesn’t change in the next decade, either, so all our phony-baloney jobs are safe.

            -- The Chinese stunt of spooky entanglement in orbit (and no, that’s not me using florid prose: that’s actual terminology from the domain of quantum computing, proving that physicists can party as hard as anyone else) in 2017 led to some rather fast progression in that field in the following year. Quantum computing came faster than most predicted...and, with it, quantum cryptography...and then became pretty much a non-event. The machines got faster, and the way to break crypto became easier, then the crypto got more complex, all in quick succession, so it was pretty much business as usual, albeit with much bigger numbers.

            -- It was the tail end of 2018 and the beginning of 2019 when organizations started moving out of the cloud. Well, not so much out of the cloud, but away from the cloud as a managed service. When legislation started appearing in different countries, putting legal liability for malicious/negligent behavior leading to data breaches on the provider instead of the customer, prices for cloud services shot through the roof...and somebody smart (I won’t say who-- wait for it, you’ll be surprised) pointed out that having cloud managed services wasn’t really revolutionary, it was just two steps backward into the old timesharing/process waiting mainframe model (okay, screw it: it was Bruce Schneier....and yes, nobody was surprised). Managers who had created c.v. bullet points for moving their enterprise IT into the cloud suddenly realized they could create even more bullet points by moving the enterprise out, and investors did as investors always do: ignored the stupid management decisions that happened before, and lauded the new management decisions as the best thing EVAR, which would surely lead to golden streets and free cotton candy for everyone.

            -- It wasn’t quite 2018 when it happened, but that was the year the seeds were sown for the end of privacy...which would eventually lead to real security and topple some of the elements that had historically been viewed as fundamental to the nation-state. It was a politician in Wyoming who figured it out: she realized that we only need privacy for things we’re not proud of. She was also running against a three-term incumbent, representing a third party, and fighting a combined doom of indolent, bored voters and an unimaginative media machine that hadn’t done the public any favors since inventing coupons. Maybe that’s why she did it...but it was the first handful of snow in the avalanche. She donned a wearable streaming camera and uploaded all of her interactions, work, meetings, and discussions to the Web, allowing every member of the public to view her actions and conversations, giving them both a direct feed into her true character and beliefs as well as a prurient voyeuristic opportunity that couldn’t be beat (she did turn off the camera when she went into the bathroom or bedroom, but that was only because of her 20th-century hangups; after she won, every subsequent candidate tripped over the others trying to out-transparent one another, and released everything they did to public review, including a great deal more snoring as a result of deviated septums than anyone ever expected). She not only made politics interesting again, but put the first nail in the coffin for privacy: people realized that safety did not come through obscurity, but through ownership of their own behavior, and there was no shame where there is mutual repugnance and commonality of banal wrongdoings.

            Of course, without the excuse, “we need secrecy to keep you safe, and we can’t tell you why,” governments lost a great deal of power as well, so many resisted, but it was a losing fight: individuals ended up with more power, freedom, wealth, and safety than they had when governments had primacy. This openness also ended the illusion of widespread monogamy, but by 2018 nobody was really buying into that canard anyway, and it’s not germane to INFOSEC, so I won’t address it here.

            -- The Data Slip was unusual. There had been doomsday predictions about the Y2K bug, but nobody saw the Slip coming, and it was weird that entire swaths of data were just gone, for no reason anyone could quite determine (I could tell you, but that would spoil things, so I won’t). Suffice it to say, after the initial freakouts, and some panicky hyperbole from the media and eckspurts, the most interesting thing about the Slip was that everyone was able to just go back three days and resume life with slightly-older numbers (bank accounts, bills, grades, etc.) without nearly as much fuss as anyone would have guessed. It proved that systems are resilient, even when other systems, on which they’re dependent, fail. It also demonstrated that Resetting could basically serve as a giant “do-over” for entrenched and failing systems...it was proposed that the same be done for systems where data had become stagnant and beyond rescue, like Social Security, markets on the precipice of collapse, and Major League Baseball. However, I won’t tell you which were chosen for reboot, and which just went away because they were awful (like Major League Baseball).

            Anyway, that’s what you have to look forward to. Don’t be alarmed: everything keeps getting better and better. If not...well, come find me in the future, and help me fix my tachyon transponder.


Is your personal information worth anything to you?

Back in 2004, I wrote an article about how various entities make money off transactions involving the personal information of customers and citizens (which, in some cases, such as the DMV in many US states, are the same group). [That article kinda predicted how access to personal data could be acquired rather easily by someone posing as a legit customer of third-party data verification services, like TML's TravelCheck...only about 18 months before Choicepoint was dinged by federal regulators for allowing exactly that kind of illicit disclosure to happen.] I suggested that private entities wouldn't start being serious about data security until customers started realizing the inherent value of their own personal information.

I was totally wrong about that. Private entities now engage in data security practices (or at least pretend to, by expending a modicum of effort and money), but not because of how their customers feel about personal privacy: instead, those private entities are much more concerned about regulatory compliance.

A lot has happened in the 13 years since that first article, including many breaches of massive databases, revealing volumes of personal customer data. Customers have also become a lot more computer-friendly, and are using personal devices to conduct online shopping and ecommerce transactions at a rate that is vast compared to even a decade ago. They also claim to be extremely concerned about "privacy" (whatever that means when individuals are asked in surveys on the topic), and have some awareness of threats like identity theft, scams, and the hacking of personal accounts/files/assets.

The weird part is, they don't behave as if they really understand the value of their own data...or as if they're truly frightened about any impact its loss would cause. The market share of companies like Target, Home Depot, and TJ Maxx has not declined significantly, even though those entities have demonstrated that they aren't the best stewards of customer data. And experiments have demonstrated that individuals are likely to part with their own passwords in exchange for incentives as basic as candy bars.

I don't think this is a shortcoming of the private sector, specifically; we know governments aren't any better at protecting information that's been entrusted to them. (And I, for one, have chosen to behave accordingly; even though I might shop at Home Depot and Target, I am not going to take any job with the US federal government that would require a security clearance, because the USG has proven that it is very good at losing my personal information.)

But customers/citizens/individuals just don't seem to care whether their data is protected, or how it is protected...even though those same individuals will say they care quite a bit.

So I have to ask...if people don't really care about the loss of their personal data (which we can tell from what they do, versus what they say), and the impact of any actual loss is really pretty nominal (often more an inconvenience, costing time rather than assets), why do we have such strict regulatory mandates in many jurisdictions? Why are there so many laws and standards in place to protect something that doesn't seem to have much value?

It might be heresy to ask, but...are we at the point where "MORE SECURITY!!" is not actually the best approach, in terms of the interests of individuals? Does the cost of adding more and more protection to personal data raise the price of goods and services ultimately provided to individuals...and does that price increase go beyond what the average cost of a loss would be to each person?