Privately Bruce

One of my former students, the amazing Catherine Thiallier, was recently lucky enough to attend a fascinating event (one I wanted to go to, but was busy teaching, unfortunately): https://it-security-munich.net/event/talking-heads-bruce-schneier-alex-stamos/. She was also kind enough to write up a distillation of the event, and has agreed to let me share it here. So I will post her impressions, then offer my own comment on the topics discussed.

Full disclosure: I am a total Bruce Schneier fanboy. Even when I disagree with him. And I made some minor edits for translation/clarity.

“Hope you are well. I forgot to give you the update on the conference. Bruce Schneier is an amazing talker. Unfortunately, he had to leave early and couldn’t stay very long, but he still made a speech and answered at least 10 questions.

Content of his speech, basically (and I guess those are the topics he develops in his books):

- The Internet was not conceived to be secure

- We always think confidentiality is the most important part of the CIA triad, but now it’s actually availability and integrity (I’m more concerned that the brakes will fail on my car, or that somebody at the hospital will change my blood type, than that somebody will read information about me).

- Three big IT security failures: patching (for example, it’s not easily possible for your router at home), authentication (no scalable solution for IoT), and supply chain processes. And the key is policies/regulations; you can’t blame the users for choosing the cheapest option. The free market won’t reward the secure solution; it has to come from government/laws. He made the analogy that when he gets on a plane, he doesn’t check the engines or go to the pilot and ask to see his credentials. At a restaurant, he doesn’t check the kitchen. We are trusting the law. The key is trust in applied law.

That was about it for the content of his speech. Then the second planned guest was sick, and instead they invited an amazing woman, Katie Moussouris, who created the bug bounty program at Microsoft; she gave a speech about vulnerability disclosure/pentesting/bug bounties. She has an amazing and very inspiring personality. Since (shame on me, but I am still new in the field!) I had never heard the term “bug bounty,” I was a bit lost at first (now I know), so all in all, a great event!!

I found some other stuff I noted and forgot to write: 

Last century, the fundamental question was how much of my life is governed by the market and how much by the state. Now the question is how much of my life is governed by technology, and under which rules.

Question from the audience: Whose fault is this? => The market rewards insecure solutions (they’re cheaper). Companies want security, as long as they can still spy on you. But it’s security OR surveillance by design => it can’t be both!

Ok, now I think that's all. I definitely want to read one of his books. Have you read "Click Here to Kill Everybody"? I loove the title already :)”

Fantastic stuff! And I’m very jealous she got to attend.

My quick comments on Bruce’s points:

- I think legislation/regulation are the worst mechanisms for enforcement...especially when it comes to IT. Mainly because government (especially the American government) has demonstrated, time and again, that it doesn’t care about privacy, and knows nothing about IT. And Bruce has been the one demonstrating this, to great effect (and some hilarity). Just one example: https://www.schneier.com/crypto-gram/archives/2003/0815.html#6. He’s also been one of the most vocal critics of the TSA...which is a perfect example of what happens when you let the government do security.

- The market doesn’t reward secure solutions because the market doesn’t care. People say they care about their personal privacy...but then contradict that claim every chance they get. People will respond on a survey that they value the security of their private data...but those same people will give up their passwords for a candy bar, and won’t stop shopping at a vendor that has demonstrated a complete inability to secure that information.

- When the market does care, there are excellent mechanisms for making it much, much more efficient and effective than a government solution. Let’s use the food example Bruce mentioned: the US federal government is in charge of making sure our food is safe...and American citizens still die from eating poisoned lettuce. Conversely, a private, cooperative, voluntary, market-driven solution has been implemented (and paid for) by a small, interested group of citizens: the kosher inspection and markings on American food products. Without any government involvement at all, the American Jewish community has installed its own quality control/assurance process into food production, and it is trusted (as far as I know) by every member of that community (it also serves to protect nonmembers, who get the benefit of food inspection/approval even though they don’t pay for it, and probably don’t know it exists). When people do start to care about their information, they will demonstrate that caring by withholding money from entities that violate privacy, and rewarding entities that offer privacy protection. Until then, adding a law to protect something people only claim to care about is worse than useless: it’s harmful and dangerous and expensive.

 

But I’m still a giant fanboy. heh.

Thanks again, Catherine, for the excellent summary of what seems to have been a great event!

 

Need-to-Know versus Least Privilege

Got into a great discussion in a recent class, about the difference between these two security concepts (indeed, some of the class thought there wasn’t even a difference). At the time, I was caught up in the conversation, and couldn’t construct a good example to clarify the distinction. But after a bit of time, I’ve formulated one that should do the job:

 

Alice and Bob are drivers/bodyguards for senior managers in the company they work for.

 

Both Alice and Bob have the correct permissions and access mechanisms to perform their duties (which are the same, for their respective managers): each driver has a passcard that will allow them access to the secure garage where the vehicles are stored; they each have authorization to check out keys for the vehicles used to transport the managers. Their passcards do not, however, allow them into other parts of the company property-- they can’t, for instance, use their passcards to enter the Research department, or Accounting. This is an example of least privilege-- they are only given a set of permissions necessary to perform their duties.

 

However, when Alice and Bob arrive at the garage to check out their respective vehicles, they are not given the routes and destinations of other managers-- only of the manager they are driving/protecting that day. Alice cannot see the destination of Bob’s vehicle, and Bob can’t see Alice’s. That information is given exclusively to the people involved in coordinating the movements of the specific senior managers, thus limiting the number of people who might compromise its security. This is an example of need to know-- Bob does not need to know the destination of Alice’s vehicle.

 

To put it in general terms, least privilege usually has to do with clearances and roles, while need to know is typically based on which projects or customers a person is working on/for, and allows for compartmentalization.
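The contrast can be sketched in a few lines of Python. This is a toy model of the driver example above (all names are mine, purely illustrative): roles carry privileges, while assignments carry knowledge.

```python
# Toy sketch of least privilege vs. need to know, using the
# Alice/Bob driver example. All names are hypothetical.

# Least privilege: a role grants only the permissions the job requires.
ROLE_PERMISSIONS = {
    "driver": {"enter_garage", "check_out_keys"},  # no Research, no Accounting
    "researcher": {"enter_research"},
}

# Need to know: within the same role, information is compartmentalized
# per assignment.
ROUTE_ASSIGNMENTS = {
    "alice": "route_manager_1",
    "bob": "route_manager_2",
}

def can_perform(role: str, action: str) -> bool:
    """Least-privilege check: is this action in the role's permission set?"""
    return action in ROLE_PERMISSIONS.get(role, set())

def can_view_route(user: str, route: str) -> bool:
    """Need-to-know check: is this user assigned to this specific route?"""
    return ROUTE_ASSIGNMENTS.get(user) == route

# Both drivers share the same privileges...
assert can_perform("driver", "enter_garage")
assert not can_perform("driver", "enter_research")

# ...but each can see only their own manager's route.
assert can_view_route("alice", "route_manager_1")
assert not can_view_route("alice", "route_manager_2")
```

Note that the two checks consult entirely different data: one is keyed by role, the other by individual assignment-- which is the whole distinction.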

 

While I may have done a disservice to the class in not coming up with this analogy earlier, I’m hoping it serves the purpose for anyone else confused about these concepts.

 

 

HIPAA or Giraffe?

            When we (in the INFOSEC community) think of HIPAA, we usually think of the security implications and requirements. That’s our perspective, and what’s important to us, as practitioners. The law, on the other hand, has very little to do with security-- most of the security-related content is wedged into the law’s Section 264, which basically tasks the Secretary of the US Department of Health and Human Services with figuring out what protections should be put on medical information for individual patients. When the law is copied from the Web into MS Word, Section 264 comes to about a page of text, while the entire law is 178 pages.

You can find it here:

https://www.govinfo.gov/content/pkg/PLAW-104publ191/html/PLAW-104publ191.htm

 

            The weird thing, from where I sit, is that this law, which is purported to enhance the security of patient data, does pretty much the opposite. The law encourages (just short of a mandate) putting all American medical data into an electronic format, according to a template that the law also tasks the federal government with creating. My question: which is more secure-- paper records or electronic records?

 

            - Paper records can be stolen, modified, or destroyed, assuming an attacker can gain physical access to them. Major or minor disasters, such as fire and flood, could likewise destroy/damage physical records. However, copying these records, or modifying them in a quasi-undetectable way, is a cumbersome, time-consuming process: the attacker would have to capture the data with the use of a device (a camera or photocopier), usually page-by-page, and typically with a light source present. Even stealing paper records is somewhat difficult: paper files are fairly heavy, and quite unwieldy...stealing the records of, say, 1,000 patients (if each record is 100 pages long, which is actually a fairly small patient record) would be nearly impossible for a single attacker without using a tool like a forklift or handcart, and making several trips between where the records are stored and where the attacker wants to transport them (say, a vehicle).

 

            - Electronic records are easy to steal in bulk: a file or a thousand files or a million files can be moved, erased, or copied without much difference in effort (granted, there may be a considerable difference in the time required to copy a million files versus a single file, but compared to the time it would take to copy a million hardcopy files, this duration is negligible). Modifying a single file, or a hundred files, or a thousand, through the use of an automated script, in an otherwise-undetectable manner, would be much easier than trying to physically change a paper record. And electronic theft/destruction/modification can be done remotely: the attacker never needs to have physical access to the data in order to harm it. Electronic media (drives, tapes, etc.) are still susceptible to physical disasters like fire and flooding.

 

            With that said, an electronic record can be duplicated easily for archival purposes (the same quality that makes it easy to steal also makes it easy to create backups-- multiple copies that might be stored in different locations, and thus survive a disaster). An electronic record can be readily encrypted/decrypted by the owner; this would be just about impossible to do with paper records in any reasonable way. And an electronic data store, and each individual file, can be subject to logging and monitoring in a way that is impossible for hardcopy: a piece of paper cannot tell its owner how many eyeballs have seen it.
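That last property is easy to demonstrate. Here is a toy sketch (all names are hypothetical-- this is not any real EHR API) of a record store that automatically logs who touched each record and when:

```python
# Toy sketch of the logging/monitoring property of electronic records:
# every read can be recorded automatically -- something a paper chart
# cannot do. All names here are hypothetical.
from collections import defaultdict
from datetime import datetime, timezone

class AuditedRecordStore:
    def __init__(self):
        self._records = {}
        self.access_log = defaultdict(list)  # record_id -> [(who, when), ...]

    def put(self, record_id, data):
        self._records[record_id] = data

    def get(self, record_id, accessed_by):
        # Log the access before returning the data.
        self.access_log[record_id].append(
            (accessed_by, datetime.now(timezone.utc))
        )
        return self._records[record_id]

store = AuditedRecordStore()
store.put("patient-42", {"blood_type": "O+"})
store.get("patient-42", accessed_by="dr_alice")
store.get("patient-42", accessed_by="nurse_bob")

# The owner can now answer: how many eyeballs have seen this record?
print(len(store.access_log["patient-42"]))  # 2
```

Of course, the same wrapper that makes auditing trivial also illustrates the flip side of the argument: the data is one scripted loop away from being copied in bulk.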

 

            I’m not really sure the answer to every security issue is “put it on a computer.” Then again, I’m not a Luddite, either: I don’t think we should stick to archaic modes of data processing and communication just to avoid security issues.

            However, I think this law is a perfect example of how attempting to codify security through a given practice/measure can, instead, harm that very same goal. I don’t think there was much of a market for ransoming patient data before HIPAA, and I don’t think hospitals and doctors had much of an IT security budget before data was converted to electronic form (which, again, is not always the best policy: the 414s hacking crew demonstrated all the way back in the 1980s that medical equipment/services could be harmed remotely). But there are also unintended consequences of efforts such as the HIPAA legislation; one of these is that the cost of medical care in the United States continues to escalate, and the cost of compliance for laws such as this makes it harder for new, innovative, small providers to enter the market and compete.

            So was this law useful for patients? Or did it harm them overall-- from both a security perspective and an access-to-healthcare perspective?

            I don’t have much info about it. Glad to hear whatever anyone else has to contribute, in the comments or in private messages.

Stegotrojansaurus

IF YOU ARE STUDYING FOR A CERTIFICATION EXAM, STOP READING-- THIS IS PURELY ACADEMIC AND WILL ONLY CONFUSE YOU

 

 

When I explain steganography to my students, I usually say, “It’s a message in one medium put inside another medium-- more like encoding than cryptography.” I stress that steganography is NOT crypto, even though the two topics always seem to be taught side by side. I often use the example of Jeremiah Denton, who, as a prisoner of war, blinked the word “torture” in Morse code while being forced to make propaganda films against his country (https://www.youtube.com/watch?v=rufnWLVQcKg). I talk about putting a text message inside the code for a .jpg, and so forth.
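For the curious, the image-embedding idea can be shown in miniature. This is a toy sketch of least-significant-bit (LSB) embedding, using a plain bytearray to stand in for raw pixel data (real JPEG compression would destroy naive LSB data, so this is purely illustrative, and all names are mine):

```python
# Toy LSB steganography: hide a text message in the lowest bit of each
# byte of a "carrier" (a bytearray standing in for raw image pixels).
# Flipping only the low bit leaves the carrier visually near-identical.

def embed(carrier: bytearray, message: bytes) -> bytearray:
    # Flatten the message into bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(carrier), "carrier too small for message"
    out = bytearray(carrier)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite the lowest bit
    return out

def extract(carrier: bytearray, length: int) -> bytes:
    # Read the low bit of each carrier byte and reassemble the message.
    bits = [b & 1 for b in carrier[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n : n + 8]))
        for n in range(0, len(bits), 8)
    )

pixels = bytearray(range(256))      # fake "pixel" data
stego = embed(pixels, b"torture")   # a nod to the Denton example
print(extract(stego, 7))            # b'torture'
```

The key point for class: nothing here is encrypted. Anyone who suspects the trick can recover the message instantly-- it is hidden, not protected.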

 

As almost always happens, a student in a recent class taught me something I did not know before. But this case was exceptional, because it was something that had simply never occurred to me at all, and I don’t think I’ve ever heard anyone else suggest it:

 

Trojan horse applications are a form of steganography.

 

It’s kind of perfect. The malware, a message in one medium (the executable), is hidden inside a message in another medium, such as a photo or movie or text document or whatever (sometimes-- there are examples of Trojans where both the malware and its carrier are executables, or there is just one executable with two aspects: one desirable to the victim, and one not).

 

This is purely a philosophical point: it doesn’t mean anything earth-shattering in the world of INFOSEC. But I love it when a student has a completely new take on some fairly old ideas. Blew me away. Good job, Ann-Kathrin.

'Membering Mnemonics

I often learn quite a bit from the students I’m supposed to be teaching (mainly because they’re invariably smarter than I am). I also love mnemonics— those mental tricks and reminders that help you recall concepts and bits of information.

This past week, one of my classes was going over the OSI and TCP/IP networking models; I had a great mnemonic for OSI (Please Do Not Teach Security People Anything)…but I confessed that I have no way of remembering the TCP/IP Model.

One of the participants said she had something, but was reluctant to share. We coaxed it out of her. I share it, slightly modified, with you now:

“Not In The….Arctic.”

I will never again forget the names of the four layers of the TCP/IP networking model. Though I may try.

Thanks, Lauri— you’re a good teacher.

Wandering Security

For the first time ever, I ran across a hotel business center (desktop PC and printer) that had the USB ports physically blocked out. I find that interesting only because I’ve often considered how easy it would be to introduce malware/whatever into a business center (and often hoped those machines are airgapped from the hotel’s production environment).

Of course, this was at a time when I needed to print something off a USB stick, instead of, say, an email I could access through a Web browser.

I found out that unplugging the keyboard would, yes, open a viable USB port that wasn’t limited to just human interface devices. Sure, I was limited to mouse input to manipulate the file (because, well-- no keyboard). So it seems someone put at least some good thought into locking down that system, but then left a giant pathway right through their control policy.

Not sure what the workaround would be, short of putting Super Glue on all the keyboard/monitor USB connections for every PC in every property in that hotel chain. Or going with thin clients that have peripherals that are hardwired and not connected by USB (come to think of it, with a very limited target functionality, why does the business center need full PCs, anyway?).

Anyone ever given any thought to this?

The Flatline Cohesion Principle

This week’s CCSP class pointed out that one of the multiple-choice answers in my book of practice tests included the term “flatline cohesion principle.” They asked me what it meant, and I had to admit that I had no clue…maybe it meant that I was drinking too much scotch when I wrote the book?

Turns out, it was a nonsense term I invented as a distractor from the correct answer to that specific question. So we discussed the idea, and decided we had to come up with a definition for this entirely made-up term.

The consensus was that it should mean: “When you write a book of practice tests that may or may not have complicated, misleading questions in it, then use your class to crowdsource how worthy the material is for study purposes.”

I do like this. But I am very open to alternative uses for the term. If someone comes up with something better, put it in the Comments section, and I’ll send you a free copy of the book. I will be the sole judge of what constitutes “better.”

In the meantime: everyone should follow the flatline cohesion principle.

And many, many thanks to this week’s CCSP class participants: y’all were awesome, and I think you’re all gonna conquer the exam.

Sharp Security

In the US, possession of a switchblade is a federal offense, but butterfly knives are as legal as rye bread.

In Germany, it’s just the opposite. About switchblades and butterfly knives, I mean— bread’s not illegal there, as far as I know.

Funny thing: neither place, far as I can tell, suffers from crime waves using either instrument.

I am sure someone smarter than me could figure out some sort of meaning in this.

CCSP Test Feedback

From a recent student:

”I found it to be quite challenging, mostly because more than a few of the questions and / or answers were so tersely worded that it was very hard to determine what was being asked.  I also ran into some test questions on concepts that weren’t covered in the course material, or if they were,  it was in passing and didn’t really justify the attention it got on the exam.  However, I passed, so it’s all behind me now.  :^) “

Amazon Data Leaks

Meh. When I first saw a notice that contained the same words as the headline on this entry, I thought, “well, here begins the end of cloud managed services.”

But then I read an article [like this one] and saw that it was really Amazon employees taking bribes from retailers to remove negative reviews.

So…”help me sell more sandals,” is a far cry from, “sell me my competitor’s data.” I would imagine Amazon’s main concern is that the bribes are less expensive than what Amazon could otherwise charge for this same service…and go directly to the employees, instead of to Amazon.

RECENT CISSP CAT EXAM NOTES

Got an email from a recent former student...the kind of email I really enjoy:

"Hi Ben, I wanted to let you know that I took my test yesterday and passed at 100 questions :D

 

- After our class, I studied using mostly the Boson practice exams (reading the explanation for EVERY question, failed or passed).

- After that I bounced back and forth between Boson random exams, the updated Sunflower guide, and the 11th Hour book (which was great for last-minute cramming, the last 2 days leading up to the exam).  I also watched Kelly Handerhan's CISSP prep videos at Cybrary prior to our class, and various other YouTube videos (Larry Greenblatt's CISSP exam tips were helpful) here and there.

- I studied for about 2-3 hours a day, every day, for 4 weeks total (taking 1.5 weeks off for vacation).

- I was 100% certain that I was going to fail while taking the exam.  I was so sure of it that I considered just picking the same letter answer over and over to end the test and GTFO at around 80 questions.  Glad I didn't.

- I took my time reading and re-reading each question and answer so many times that I thought I was going to shoot myself in the foot with the time of the exam.   I had about 30min left at 100 questions.

 

Thank you for all of your wisdom and guidance during our class.  I feel that it helped a lot and set a good expectation for the exam and framework of where to study. It helped me realize my weak areas so I knew where to focus.  Although, the test has a funny way of making you feel that you're completely unprepared while you're actually taking it. :)"

Physical Badsec

If your physical security process involves controlled items, make sure you train your staff not to hand a stack of the controlled items to unauthorized personnel during the procedure...else someone could pilfer one or two, and use them for all sorts of nefarious purposes.

 

In totally unrelated news, if someone wants to smuggle contraband/small children/explosives aboard a cruise ship, drop me a line in the Comments section.

[Image: carnival luggage tag.jpg]