My Impressions of RSA 2019

Multi mega edge cloud defense time perimeter virtual securi secura securahhh trans service auth train policy force intel synch remote filter.

What staggered me more than the products and services were the hackneyed and trite methods for seducing tradeshow passersby; the food, toys, barkers, tchotchkes, crap, celebrities, clothes, stickers, pins, bric-a-brac, come-ons, and flat-out bribes were overwhelming and underwhelming at the same time. There must (MUST) be a better way to get information out to the people who need and want it.

My favorite booth? Two people sitting at an empty desk, with no posters, background, or other frippery. I got their marketing info and card, and will post the name of the company when I get it out of my luggage.

…Follow-up…this was the company I was talking about: https://www.openiam.com/

I don’t know what they do. I don’t know if their product/service is any good. But I appreciated the approach enough that I’m going to find out.

Privately Bruce

One of my former students, the amazing Catherine Thiallier, was recently lucky enough to attend a fascinating event (one I wanted to go to, but was busy teaching, unfortunately): https://it-security-munich.net/event/talking-heads-bruce-schneier-alex-stamos/. She was also kind enough to write up a distillation of the event, and has agreed to let me share it here. So I will post her impressions, then offer my own comment on the topics discussed.

Full disclosure: I am a total Bruce Schneier fanboy. Even when I disagree with him. And I made some minor edits for translation/clarity.

“Hope you are well. I forgot to give you the update on the conference. Bruce Schneier is an amazing talker. Unfortunately he had to leave early and couldn’t stay very long, but he still made a speech and answered at least 10 questions.

Content of his speech, basically (and I guess those are the topics he develops in his books):

- The Internet was not conceived to be secure

- We always think confidentiality is the most important part of the CIA triad, but now it’s actually availability and integrity (I’m more concerned that the brakes might fail on my car, or that somebody at the hospital might change my blood type, than that somebody might read information about me).

- Three big IT security fails: patching (example: it’s not easily possible for your router at home), authentication (no scalable solution for IoT), and supply chain processes. The keys are policies/regulations; you can’t blame users for choosing the cheapest option. The free market won’t reward the secure solution; it has to come from government/laws. He made the analogy that when he gets on a plane, he doesn’t check the engines or ask the pilot to show his credentials. Same with a restaurant: he doesn’t inspect the kitchen. We are trusting the law. The key is trust in applied law.

That was about it for the content of his speech. Then the second planned guest was sick, and instead they invited an amazing woman, Katie Moussouris, who created the bug bounty program at Microsoft and gave a speech about vulnerability disclosure/pentests/bug bounties. She has an amazing and very inspiring personality. Since (shame on me, but I’m still new in the field!) I had never heard the term “bug bounty,” I was a bit lost at first - now I know - so all in all, a great event!!

I found some other stuff I noted and forgot to write up:

Last century, the fundamental question was how much of my life is governed by the market and how much by the state. Now the question is how much of my life is governed by technology, and under which rules.

Question from the audience: Whose fault is this? => The market rewards insecure solutions (cheaper). Companies want security as long as they can spy on you. But it’s security OR surveillance by design => it can’t be both!

Ok, now I think that's all. I definitely want to read one of his books. Have you read “Click Here to Kill Everybody”? I loove the title already :)”

Fantastic stuff! And I’m very jealous she got to attend.

My quick comments on Bruce’s points:

- I think legislation/regulation are the worst mechanisms for enforcement...especially when it comes to IT. Mainly because government (especially the American government) has demonstrated, time and again, that it doesn’t care about privacy and knows nothing about IT. And Bruce has been the one demonstrating this, to great effect (and some hilarity). Just one example: https://www.schneier.com/crypto-gram/archives/2003/0815.html#6. He’s also been one of the most vocal critics of the TSA...which is a perfect example of what happens when you let the government do security.

- The market doesn’t reward secure solutions because the market doesn’t care. People say they care about their personal privacy...but then go ahead and contradict that statement every chance they get. People will respond on a survey that they value the security of their private data...but those same people will give up their passwords for a candy bar, and won’t stop shopping at a vendor who has demonstrated a complete inability to secure that information.

- When the market does care, there are excellent mechanisms for making it much, much more efficient and effective than a government solution. Let’s use the food example Bruce mentioned: the US federal government is in charge of making sure our food is safe...and American citizens still die from eating poison lettuce. Conversely, a private, cooperative, voluntary, market-driven solution has been implemented (and paid for) by a small, interested group of citizens: the kosher inspection and markings on American food products. Without any government involvement at all, the American Jewish community has installed its own quality control/assurance process into food production, and it is trusted (as far as I know) by every member of that community (and it also serves to protect nonmembers, who get the benefit of food inspection/approval even though they don’t pay for it, and probably don’t know it exists). When people do start to care about their information, they will demonstrate that they care by withholding money from entities that violate privacy, and rewarding entities that offer privacy protection. Until then, adding a law to protect something people only claim to care about is worse than useless: it’s harmful and dangerous and expensive.

 

But I’m still a giant fanboy. heh.

Thanks again, Catherine, for an excellent summary of what seems to have been a great event!

 

Need-to-Know versus Least Privilege

Got into a great discussion in a recent class about the difference between these two security concepts (indeed, some of the class thought there wasn’t even a difference). At the time, I was caught up in the conversation and couldn’t construct a good example to clarify the distinction. But after a bit of time, I’ve formulated one that should do the job:

 

Alice and Bob are drivers/bodyguards for senior managers in the company they work for.

 

Both Alice and Bob have the correct permissions and access mechanisms to perform their duties (which are the same, for their respective managers): each driver has a passcard that will allow them access to the secure garage where the vehicles are stored; they each have authorization to check out keys for the vehicles used to transport the managers. Their passcards do not, however, allow them into other parts of the company property-- they can’t, for instance, use their passcards to enter the Research department, or Accounting. This is an example of least privilege-- they are given only the set of permissions necessary to perform their duties.

 

However, when Alice and Bob arrive at the garage to check out their respective vehicles, they are not given the routes and destinations of other managers-- only that of the manager each is driving/protecting that day. Alice cannot see the destination of Bob’s vehicle, and Bob can’t see Alice’s destination. That information is given only to the people involved in coordinating the movements of the specific senior managers, thus limiting the number of people who might compromise the security of that information. This is an example of need to know-- Bob does not need to know the destination of Alice’s vehicle.

 

To put it in general terms, least privilege usually has to do with clearances and roles, while need to know is typically based on which projects or customers a person is working on/for, and allows for compartmentalization.
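
For readers who think in code, here’s a minimal sketch of the distinction. All names and data structures are hypothetical, purely for illustration:

```python
# Least privilege: a role grants only the permissions the job requires.
ROLE_PERMISSIONS = {
    "driver": {"garage_entry", "vehicle_key_checkout"},  # no Research, no Accounting
}

# Need to know: within the same role, information is compartmentalized per assignment.
ROUTE_ASSIGNMENTS = {
    "alice": "Route A",
    "bob": "Route B",
}

def can_access_area(role: str, area: str) -> bool:
    """Least-privilege check: is this permission in the role's minimal set?"""
    return area in ROLE_PERMISSIONS.get(role, set())

def get_route(requester: str, driver: str) -> str:
    """Need-to-know check: a driver may see only their own assignment."""
    if requester != driver:
        raise PermissionError(f"{requester} has no need to know {driver}'s route")
    return ROUTE_ASSIGNMENTS[driver]

print(can_access_area("driver", "garage_entry"))    # True: required for the job
print(can_access_area("driver", "research_entry"))  # False: least privilege denies it
print(get_route("alice", "alice"))                  # Route A: her own assignment
try:
    get_route("bob", "alice")                       # Bob asking for Alice's route...
except PermissionError as err:
    print(err)                                      # ...denied: no need to know
```

Note that both drivers pass the least-privilege checks identically; only the need-to-know check distinguishes between them.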

 

While I may have done a disservice to the class in not coming up with this analogy earlier, I’m hoping it serves the purpose for anyone else confused about these concepts.

 

 

Stegotrojansaurus

IF YOU ARE STUDYING FOR A CERTIFICATION EXAM, STOP READING-- THIS IS PURELY ACADEMIC AND WILL ONLY CONFUSE YOU

 

 

When I explain steganography to my students, I usually say, “It’s a message in one medium put inside another medium-- more like encoding than cryptography.” I stress that steganography is NOT crypto, even though the two topics always seem to be taught side by side. I often use the example of Jeremiah Denton, who, as a prisoner of war, blinked the word “torture” in Morse code while being forced to make propaganda films against his country (https://www.youtube.com/watch?v=rufnWLVQcKg). I talk about putting a text message inside the code for a .jpg, and so forth.
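
For the curious, here’s a minimal sketch of the classic least-significant-bit (LSB) technique behind the “text inside an image” example. Real tools operate on actual image pixel data (and a .jpg adds lossy-compression complications); this toy version hides the message in a plain bytearray so it runs with no dependencies:

```python
# LSB steganography sketch: hide each bit of a message in the low-order bit
# of successive "pixel" bytes, where the change is visually imperceptible.

def embed(cover: bytearray, message: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(cover), "cover medium too small for the message"
    stego = bytearray(cover)
    for pos, bit in enumerate(bits):
        stego[pos] = (stego[pos] & 0xFE) | bit   # overwrite only the lowest bit
    return stego

def extract(stego: bytearray, length: int) -> bytes:
    out = bytearray()
    for i in range(length):
        byte = 0
        for offset in range(8):
            byte = (byte << 1) | (stego[i * 8 + offset] & 1)
        out.append(byte)
    return bytes(out)

cover = bytearray(range(256))        # stand-in for image pixel data
stego = embed(cover, b"torture")     # a nod to the Denton example above
print(extract(stego, 7))             # b'torture'
```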

 

As almost always happens, a student in a recent class taught me something I did not know before. But this case was exceptional, because it was something that had simply never occurred to me at all, and I don’t think I’ve ever heard anyone else suggest it:

 

Trojan horse applications are a form of steganography.

 

It’s kind of perfect. The malware, which is a message of one medium (the executable), is hidden inside a message of another medium, such as a photo or movie or text document or whatever (sometimes-- there are examples of Trojans where both the malware and its carrier are executables, or there is just one executable with two aspects: one desirable to the victim, and one not).

 

This is purely a philosophical point: it doesn’t mean anything earth-shattering in the world of INFOSEC. But I love it when a student has a completely new take on some fairly old ideas. Blew me away. Good job, Ann-Kathrin.

Wandering Security

For the first time ever, I ran across a hotel business center (desktop PC and printer) that had the USB ports physically blocked out. I find that interesting only because I’ve often considered how easy it would be to introduce malware/whatever into a business center (and often hoped those machines are airgapped from the hotel’s production environment).

Of course, this was at a time when I needed to print something off a USB stick, instead of, say, an email I could access through a Web browser.

I found out that unplugging the keyboard would, yes, open up a viable USB port that wasn’t limited to just human interface devices. Sure, I was limited to mouse input in order to manipulate the file (because, well-- no keyboard), but it seems someone put at least some good thought into locking down that system, then left a giant pathway right through their control policy.

Not sure what the workaround would be, short of putting Super Glue on all the keyboard/monitor USB connections for every PC in every property in that hotel chain. Or going with thin clients whose peripherals are hardwired rather than connected by USB (come to think of it, given such limited target functionality, why does the business center need full PCs, anyway?).
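
For what it’s worth, here’s a minimal sketch of a software-level version of that control, assuming a Windows PC and admin rights: disabling the USB mass storage driver (the long-standing USBSTOR registry trick) means that even a port freed up by unplugging the keyboard won’t mount a flash drive, while keyboards and mice keep working.

```python
# Sketch: disable (or re-enable) the Windows USB mass storage driver.
# Run with administrative privileges; HID devices are unaffected.
import winreg

USBSTOR_KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"

def set_usb_storage(enabled: bool) -> None:
    start_value = 3 if enabled else 4   # 3 = load on demand, 4 = disabled
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR_KEY,
                        0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, start_value)

set_usb_storage(False)   # flash drives stop mounting, regardless of which port
```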

Anyone ever given any thought to this?

Anatomy Of A Troubleshooting Session

- I wake up. Sit down at laptop, quickly notice there are aberrant issues with the keyboard: certain keys do not work, but the rest do.

- I freeze all my work/open resources. Immediately start searching for info about malware that attacks only specific keyboard keys.

- Reboot.

- Do a quickscan. No hits.

- Look over my restore points, just to make sure I still have my current data.

- Check hardware drivers, make sure they are all up to date.

- Search for more info about malware, particularly for certain apps (browser/Office). Spend a good half-hour reading about funky viruses.

- Girlfriend wakes up. I tell her that some of the keys on my keyboard aren't working. She has no tech background whatsoever.

- "Dog hair?" she asks.

- Take out the can of compressed air, spray beneath keyboard.

Dammit. Problem solved.

Ditching the ALE

At this point in my career, I deliver a lot of certification prep content, through teaching and writing. And I see certain things that were included at the outset of the industry as guidelines and suggestions that just aren't applicable anymore (or at least, not applicable in the same way as when they were proposed). My primary customer is ISC2, for the CISSP and CCSP certs, but I've taught ISACA and CompTIA certification prep courses in the past, and many of them suffer from the same problems. While I can't say with certainty exactly why all the major INFOSEC certifications suffer from the same blind spots, I can guess: most of the test writers have the same training in the same fundamental concepts, get the same certifications (from multiple vendors), have received that content from their predecessors, and will pass it to the next generation in kind.

This leads to the possibility of stagnancy in content and approach. Which isn't terrible, for certain fundamental security concepts (say, defense-in-depth/layered approach/multiple redundant controls, or the use of two-person integrity), but there are other notions/ideas that are simply treated as sacrosanct in perpetuity, instead of being re-examined for validity, assessed as nonsense, and thrown onto the trash pile of history.

Today, I want to talk about one of the latter: the ALE formula.

If you don't know what it is, consider yourself lucky. Then consider yourself unlucky, because if you're going to go get an INFOSEC cert, I can tell you for damn sure that it's going to be one of the things you'll have to learn and memorize whether you like it or not.

Simply put, it's an approach to estimating the cost of a given type of negative impact resulting from a security risk being realized. We teach INFOSEC practitioners that this value determination can be used to weigh the possible costs of controls addressing a particular risk, and to figure out whether or not to spend the money protecting against it.

Which is a good idea: spending too much on addressing a particular threat is just as bad as not spending enough...and, arguably, sometimes worse, because spending too much leaves you with a false sense of security and a lack of money, whereas not spending enough just means you have some of that risk left.

But the ALE formula is not really the best tool to accomplish this in our realm of INFOSEC, for many, many reasons. And we should stop requiring its use, and teaching it to newbies.

Why? Well, for starters, let's talk about the potential cost of a single type of incident, known in the formula as the SLE.
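
For anyone who hasn’t had it drilled into them yet, the textbook version goes like this: the single loss expectancy (SLE) is the asset value (AV) multiplied by the exposure factor (EF), and the annualized loss expectancy (ALE) is the SLE multiplied by the annualized rate of occurrence (ARO). A toy calculation, with made-up retail numbers:

```python
# The textbook ALE calculation, using invented numbers for a retail store.
asset_value = 50_000           # total inventory value ($)
exposure_factor = 0.10         # fraction lost in one shoplifting incident
aro = 4                        # expected incidents per year

sle = asset_value * exposure_factor   # single loss expectancy: $5,000
ale = sle * aro                       # annualized loss expectancy: $20,000

# Classic guidance: don't spend more than the ALE per year on the control.
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")
```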

It's worth noting that the ALE formula works great in the physical security universe, where tangible assets can be mapped to specific losses. If I'm trying to secure a retail space selling goods that are of a particular size, shape, weight, and cost, I know some discrete, objective information about those assets. I know how many can be stolen at one time, by a single person picking them up and walking off with them. I know the amount (number and dollar value) of my inventory, based on another limiting factor: the footprint of my retail space and storage area. I know the various access points to get at my inventory: the doors/windows/loading areas. All these things can be defined and somewhat limited.

With electronic data as assets, all this numeric determination goes out the window (I mean, not the literal window, like tangible assets, but a metaphorical window, because the determination is impossible). I can't really know how many "data"s a person can steal at any given moment, because the size of files or objects or characters doesn't really have any meaning in the physical universe-- a flashstick that weighs less than an ounce can carry one file or a thousand files, and any given file can contain one character or a million characters, and all of this fits inside one person's pocket, anyway (and that person doesn't need any exceptional muscles to carry even the heaviest flashstick).

So trying to determine the monetary impact of a single security event involving data is impossible, unlike the impact of a single security event involving physical assets. If someone steals one spoon in a retail environment, we know the cost of that spoon (and we actually know several costs: the wholesale cost we paid to get the spoon, the retail price we would have realized in revenue if we had sold that spoon, and the logistical cost of getting that spoon to the retail location)...but if someone steals a file, the value of the information in that file can vary wildly. A file might contain a photo of the user’s pet kitten (which is of value only to the user, and even then only arguably, if the user has a copy of the photo), or it can contain the privacy data of the target organization’s entire customer base, and the relevant monetary impact can stretch into the range of millions of dollars, as the result of statutory damages assessed against the organization, or the loss of market share, or direct fraud on the part of the perpetrator using that information, and so on.

Sure, insurance companies in recent years have created various approaches to assigning value to data, but these are all just gibberish. Take, for instance, the idea of “average file cost”-- even if we were to determine the midpoint of value between the kitten photo and the customer list, that middle value would be meaningless when we suffered an actual loss: if we lost the kitten photo, and the insurance claim paid the “average cost,” we’d be receiving far more in cash payout than the thing was worth, and if we lost the customer list, the “average cost” claim payout would be far less than the damage we’d suffered. And what’s the size/value of an “average” file, anyway? How many files are there in a given business environment? The concept is absolutely pointless.
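
To make that concrete, a toy calculation (numbers invented for illustration):

```python
# Why "average file cost" fails at claim time, with made-up values.
kitten_photo = 0              # actual value of the lost kitten photo ($)
customer_list = 5_000_000     # actual damages from losing the customer list ($)

average_file_cost = (kitten_photo + customer_list) / 2   # $2,500,000 per file

print(f"kitten photo: overpaid by ${average_file_cost - kitten_photo:,.0f}")
print(f"customer list: underpaid by ${customer_list - average_file_cost:,.0f}")
```

Either way, the payout bears no relationship to the actual loss.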

When the SLE is just a fictional construct, the entire ALE formula is ridiculous. We could use just this argument to eliminate the wretched thing from our industry. But there are even more reasons why ALE is stupid in the INFOSEC world-- and I’ll get to those in subsequent articles.

 

 

The Benefits of Late Adoption

Perhaps my greatest shortcoming as a nerd is my reluctance to adopt technology early; I simply have no interest in the latest, bestest, newest, coolest gadgets on the market.

Yes, this can cause me to lag in my estimation of IT solutions. Yes, I am mocked (and rightly so) by students and colleagues when I tell them I still have an AOL email account. Yes, I am old and everybody should get off my lawn. But there is also an upside to late adoption:

- Huge cost savings. Huge. I can wait two years for the novelty of a thing to wear off, and get a much-reduced price when I get around to buying it. This is especially true in software, and especially especially true for games.

- I'm never involved in the proof of concept. Back when I was a young (read: stupid) man, I bought the first model year of a new car. Within the first year of owning it, all the defects and design problems inherent in that model became quickly apparent, and there were multiple recalls. Waiting a while to buy a thing means that the first wave of customers has taken the brunt of field testing, and the thing is now ready for actual regular use.

- No false sense of security. The latest suite of products is often seen as inviolable, because it uses the latest security protocols and tools; this can lead to sloppy practices and habits (like crafting and transmitting data with sensitive info, even when it could be avoided) because users feel reliance on and trust in the product. This puts them one zero-day exploit away from feeling very silly.

- Strangely enough, legacy platforms may be more secure in some ways than their new-fangled replacements...mainly because aggressors won't actually believe that those legacy products are still being used for viable purposes, and won't include legacy attack methods/gear in their toolkits. I mean, I really don't think the script-kiddies even know what AOL is, much less how to hack it. Sure, a dedicated adversary won't have a tough time getting the proper attack tools once they know a target is using a legacy system, but a dedicated adversary is going to get in eventually, regardless of the age of your platform.

- Utility/productivity is always a tradeoff with risk and security. The more I can do with a tool, the more I can lose. Losing a 256K flashstick in a hotel lobby will cause me a lot less damage than dropping a 2TB flashstick. My old flipphone had no identifying data on it (other than some texts and a rudimentary Contacts list), in stark contrast to my smartphone (which, I think, has my DNA, cocktail preferences, innermost thoughts, and secret cookie cravings embedded in the BIOS).

No, I'm not saying that everyone should immediately regress to a Luddite position of rolling back three generations of tech in order to gain some slight advantage...but buying up the latest and greatest shiny boxes and zippy software is not the best choice, either.

 

 

My Favorite (and Least Favorite) Security Moment of 2017

I was preparing to teach a class in another city, and communicating via email with the POC at the client site. In addition to explaining about the location, parking, and so forth, the POC included this tidbit:

"Upon first entering the facility, you can pick up your security badger at the reception desk."

I have never, ever, been more disappointed by a typo.

2018 In Review

[Note: this piece was originally supposed to run in an industry journal, but the legal department killed it, even though the editors enjoyed it.]

            Making written predictions about anything is a fool’s errand; there are so many, many ways to be wrong. This is what happened to me when I last wrote for this esteemed publication, in 2014; the publishers were so put off by my wildly inaccurate prognostications that I’ve not been allowed back for three years. In fact, it’s only because of the recurrence of Mike Chapple’s painful campground-related scurvy that this piece appears here (and I am sure I am not alone in wishing Mike a speedy recovery; we are all eagerly awaiting what is sure to be his definitive take on “Kitten Posters -- Raising Security Awareness....and Brightening Your Day!”).

            Because of the ugliness and general scoffing that have been the mainstay of my email inbox for the past 36 months, I have decided not to make predictions: I am going to cheat. While this may cost me both my good standing in several professional organizations and my freedom (I will be violating several international agreements in this process), I think it’s worth my pride.

            I am making use of the tachyon-based communication system to send this message backward in time, from the year 2027, giving me a pretty good perspective on what is about to happen to your industry. This method of communication was itself developed in-- well, never mind....you’ll see, and I’ve broken enough laws already. So here it is: the big goings-on in the INFOSEC field, circa 2018:

            -- The public executions of the Equifax security staff went off without a hitch, and also carried the highest per-home viewership share since the final episode of M*A*S*H was broadcast. It was seen as a just outcome, not because the transgressors were incompetent (though that argument was definitely made), but because their cavalier acts of last-minute profiteering before announcing the breach were so callous and calculating. Of course, the executions only mollified the citizenry, who were only too glad to move on to the next news cycle tidbit; they did nothing to modify the behavior of security practitioners, had no substantial effect on the legal system, and didn’t even change the hiring practices of organizations looking for security personnel. And Equifax, as you’ll soon see, was able to Arthur Accenture itself into a new incarnation and suffer absolutely no ill effects to its market share or profitability.

            I mean-- come ON: we know that security and IT people are, by far, the worst violators and insider threats, both in terms of frequency and scale...and nothing ever changes. Mainly because everyone wants to pretend otherwise. That doesn’t change in the next decade, either, so all our phony-baloney jobs are safe.

            -- The Chinese stunt of spooky entanglement in orbit (and no, that’s not me using florid prose: that’s actual terminology from the domain of quantum computing, proving that physicists can party as hard as anyone else) in 2017 led to some rather fast progression in that field in the following year. Quantum computing came faster than most predicted...and, with it, quantum cryptography...and then became pretty much a non-event. The machines got faster, and the way to break crypto became easier, then the crypto got more complex, all in quick succession, so it was pretty much business as usual, albeit with much bigger numbers.

            -- It was the tail end of 2018 and the beginning of 2019 when organizations started moving out of the cloud. Well, not so much out of the cloud, but away from the cloud as a managed service. When legislation started appearing in different countries, putting legal liability for malicious/negligent behavior leading to data breaches on the provider instead of the customer, prices for cloud services shot through the roof...and somebody smart (I won’t say who-- wait for it, you’ll be surprised) pointed out that having cloud managed services wasn’t really revolutionary; it was just two steps backward into the old timesharing/process-waiting mainframe model (okay, screw it: it was Bruce Schneier....and yes, nobody was surprised). Managers who had created c.v. bullet points for moving their enterprise IT into the cloud suddenly realized they could create even more bullet points by moving the enterprise out, and investors did as investors always do: ignored the stupid management decisions that happened before, and lauded the new management decisions as the best thing EVAR, which would surely lead to golden streets and free cotton candy for everyone.

            -- It wasn’t quite 2018 when it happened, but that was the year the seeds were sown for the end of privacy...which would eventually lead to real security and topple some of the elements that had historically been viewed as fundamental to the nation-state. It was a politician in Wyoming who figured it out: she realized that we only need privacy for things we’re not proud of. She was also running against a three-term incumbent, representing a third party, and fighting the combined doom of indolent, bored voters and an unimaginative media machine that hasn’t done the public any favors since inventing coupons. Maybe that’s why she did it...but it was the first handful of snow in the avalanche. She donned a wearable streaming camera and uploaded all of her interactions, work, meetings, and discussions to the Web, allowing every member of the public to view her actions and conversations, giving them both a direct feed into her true character and beliefs and a prurient voyeuristic opportunity that couldn’t be beat (she did turn off the camera when she went into the bathroom or bedroom, but that was only because of her 20th-century hangups; after she won, every subsequent candidate fell over the others trying to out-transparent one another, and released everything they did to public review, including a great deal more snoring, as a result of deviated septums, than anyone ever expected). She not only made politics interesting again, but put the first nail in the coffin of privacy: people realized that safety did not come through obscurity, but through ownership of their own behavior, and there was no shame where there was mutual repugnance and a commonality of banal wrongdoings.

            Of course, without the excuse, “we need secrecy to keep you safe, and we can’t tell you why,” governments lost a great deal of power as well, so many resisted, but it was a losing fight: individuals ended up with more power, freedom, wealth, and safety than they had when governments had primacy. This openness also ended the illusion of widespread monogamy, but by 2018 nobody was really buying into that canard anyway, and it’s not germane to INFOSEC, so I won’t address it here.

            -- The Data Slip was unusual. There had been doomsday predictions about the Y2K bug, but nobody saw the Slip coming, and it was weird that entire swaths of data were just gone, for no reason anyone could quite determine (I could tell you, but that would spoil things, so I won’t). Suffice it to say, after the initial freakouts, and some panicky hyperbole from the media and eckspurts, the most interesting thing about the Slip was that everyone was able to just go back three days and resume life with slightly-older numbers (bank accounts, bills, grades, etc.) without nearly as much fuss as anyone would have guessed. It proved that systems are resilient, even when other systems, on which they’re dependent, fail. It also demonstrated that Resetting could basically serve as a giant “do-over” for entrenched and failing systems...it was proposed that the same be done for systems where data had become stagnant and beyond rescue, like Social Security, markets on the precipice of collapse, and Major League Baseball. However, I won’t tell you which were chosen for reboot, and which just went away because they were awful (like Major League Baseball).

            Anyway, that’s what you have to look forward to. Don’t be alarmed: everything keeps getting better and better. If not...well, come find me in the future, and help me fix my tachyon transponder.

 

 

Is your personal information worth anything to you?

Back in 2004, I wrote an article about how various entities make money off transactions involving the personal information of customers and citizens (which, in some cases, such as the DMV in many US states, are the same group). [That article kinda predicted how access to personal data could be acquired rather easily by someone posing as a legit customer of third-party data verification services, like TML's TravelCheck...only about 18 months before Choicepoint was dinged by federal regulators for allowing exactly that kind of illicit disclosure to happen.] I suggested that private entities wouldn't start being serious about data security until customers started realizing the inherent value of their own personal information.

I was totally wrong about that. Private entities now engage in data security practices (or at least pretend to, by expending a modicum of effort and money), but not because of how their customers feel about personal privacy: instead, those private entities are much more concerned about regulatory compliance.

A lot has happened in the intervening 13 years since that first article, including many breaches of massive databases revealing volumes of personal customer data. Customers have also become a lot more computer-friendly, and are using personal devices to conduct online shopping and ecommerce transactions at a rate that is vast compared to even a decade ago. They also claim to be extremely concerned about "privacy" (whatever that means, when individuals are asked in surveys on the topic), and have some awareness of threats like identity theft, hacking of personal accounts/files/assets, and scams.

The weird part is, they don't behave as if they really understand the value of their own data...or as if they're truly frightened about any impact its loss would cause. The market share of companies like Target, Home Depot, and TJ Maxx has not declined significantly, even though those entities have demonstrated that they aren't the best stewards of customer data. And experiments have demonstrated that individuals are likely to part with their own passwords in exchange for incentives as basic as candy bars.

I don't think this is a shortcoming of the private sector, specifically; we know governments aren't any better at protecting information that's been entrusted to them. (And I, for one, have chosen to behave accordingly; even though I might shop at Home Depot and Target, I am not going to take any job with the US federal government that would require a security clearance, because the USG has proven that it is very good at losing my personal information.)

But customers/citizens/individuals just don't seem to care whether their data is protected, or how it is protected....even though those same individuals will say they care quite a bit.

So I have to ask...if people don't really care about the loss of their personal data (which we can tell from what they do, versus what they say), and the impact they experience from any actual loss is really pretty nominal (often more an inconvenience that results in lost time, not lost assets), why do we have such a strict regulatory mandate in many jurisdictions? Why are there so many laws and standards in place to protect something that doesn't seem to really have much value?

It might be heresy to ask, but...are we at the point where "MORE SECURITY!!" is not actually the best approach, in terms of the interests of individuals? Does the cost of adding more and more protection to personal data raise the price of goods and services ultimately provided to individuals...and does that price increase go beyond what the average cost of a loss would be to each person?