My Impressions of RSA 2019

Multi mega edge cloud defense time perimeter virtual securi secura securahhh trans service auth train policy force intel synch remote filter.

What staggered me more than the products and services were the hackneyed and trite methods for seducing tradeshow passersby; the food, toys, barkers, tchotchkes, crap, celebrities, clothes, stickers, pins, bric-a-brac, come-ons, and flat-out bribes were overwhelming and underwhelming at the same time. There must (MUST) be a better way to get information out to the people who need and want it.

My favorite booth? Two people sitting at an empty desk, with no posters, background, or other frippery. I got their marketing info and card, and will post the name of the company when I get it out of my luggage.

…follow up…this was the company I was talking about: https://www.openiam.com/

I don’t know what they do. I don’t know if their product/service is any good. But I appreciated the approach enough that I’m going to find out.

Million-Dollar App Idea

 

You can have this idea, for free. If you make it, I wanna get a free account, though.

 

Whenever someone calls my cell phone, unless they’re in my Contacts list or otherwise approved, they get a recording that says, “The person you’re trying to reach is rejecting automated calls. To prove you’re a human, press these three numbers now:”....then it speaks three random numbers.

 

Because call spammers will quickly adapt to this new screening technique, the numbers will be read in three wildly different voices, pulled from hundreds of different voiceprints, with varying pitch, speed, and intonation, perhaps with different sounds/music playing in the background of each...like an audio CAPTCHA.

 

It will be slightly annoying for people trying to call me for the first time, but when I’m done with the call, the app should give me the option to add them to the allowed callers list.
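If somebody does want to build it, the core screening logic is tiny. Here is a rough sketch in Python -- the speak and collect_digits hooks are hypothetical stand-ins for whatever telephony platform actually answers the call, and the allowed-callers list is just illustrative:

    import random

    ALLOWED_CALLERS = {"+15551234567"}   # contacts and previously approved numbers

    def challenge_digits(n=3):
        # Three random digits, to be spoken in mismatched voices by the platform.
        return "".join(random.choice("0123456789") for _ in range(n))

    def screen_call(caller_id, speak, collect_digits):
        # Returns True if the call should ring through to the phone.
        if caller_id in ALLOWED_CALLERS:
            return True                  # known human: no challenge
        digits = challenge_digits()
        # The audio-CAPTCHA part (different voiceprints, pitch, and background
        # noise per digit) would live inside speak().
        speak("The person you're trying to reach is rejecting automated calls. "
              "To prove you're a human, press these three numbers now: "
              + " ".join(digits))
        entered = collect_digits(expected_length=3, timeout_seconds=10)
        if entered == digits:
            return True                  # offer to add the caller to the allowed list afterward
        return False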

 

Somebody smart go make this.

Privately Bruce

One of my former students, the amazing Catherine Thiallier, was recently lucky enough to attend a fascinating event (one I wanted to go to, but was busy teaching, unfortunately): https://it-security-munich.net/event/talking-heads-bruce-schneier-alex-stamos/. She was also kind enough to write up a distillation of the event, and has agreed to let me share it here. So I will post her impressions, then offer my own comment on the topics discussed.

Full disclosure: I am a total Bruce Schneier fanboy. Even when I disagree with him. And I made some minor edits for translation/clarity.

“Hope you are well. I forgot to give you the update on the conference. Bruce Schneier is an amazing talker. Unfortunately he had to leave early and couldn’t stay very long, but he still made a speech and answered at least 10 questions.

Content of his speech, basically (and I guess those are the topics he develops in his books):

- The Internet was not conceived to be secure

- We always think confidentiality is the most important part of the CIA triad, but now it’s actually availability and integrity (I’m more concerned that the brakes on my car would fail, or that somebody at the hospital changes my blood type, than that somebody reads information about me).

- Three big IT security fails: patching (example: it’s not easily possible for your router at home), authentication (no scalable solution for IoT), and supply chain processes. And the key is policies/regulations; you can’t blame users for choosing the cheapest option. The free market won’t reward the secure solution; it has to come from government/laws. He made the analogy that when he gets on a plane, he doesn’t check the engines or ask the pilot to show him his degrees. At a restaurant, he doesn’t check the kitchen. We are trusting the law. The key is trust in applied law.

That was about it for the content of his speech. Then the second planned guest was sick, and instead they invited an amazing woman, Katie Moussouris, who created the bug bounty program at Microsoft; she gave a speech about vulnerability disclosure/pentests/bug bounties. She has an amazing and very inspiring personality. As (shame on me, but I’m still new in the field!) I had never heard the term Bug Bounty, I was a bit lost - now I know - so all in all, a great event!!

I found some other stuff I noted and forgot to write: 

Last century the fundamental question was how much of my life is governed by the market and how much by the state. Now the question is how much of my life is governed by technology, and under which rules.

Question from the audience: Whose fault is this? => The market rewards insecure solutions (they’re cheaper). Companies want security as long as they can spy on you. But it’s security OR surveillance by design => it can’t be both!

Ok now I think that's all. I definitely want to read one of his books. Have you read "Click Here to Kill Everybody"? I loove the title already :)”

Fantastic stuff! And I’m very jealous she got to attend.

My quick comments on Bruce’s points:

- I think legislation/regulation are the worst mechanisms for enforcement...especially when it comes to IT. Mainly because government (especially the American government) has demonstrated, time and again, that it doesn’t care about privacy, and knows nothing about IT. And Bruce has been the one demonstrating this, to great effect (and some hilarity). Just one example: https://www.schneier.com/crypto-gram/archives/2003/0815.html#6. He’s also been one of the most vocal critics of TSA...which is a perfect example of what happens when you let the government do security.

- The market doesn’t reward secure solutions because the market doesn’t care. People say they care about their personal privacy...but then go ahead and contradict that claim every chance they get. People will respond on a survey that they value the security of their private data...but those same people will give up their passwords for a candy bar, and won’t stop shopping at a vendor who has demonstrated a complete inability to secure that information.

- When the market does care, there are excellent mechanisms for making it much, much more efficient and effective than a government solution. Let’s use the food example Bruce mentioned: the US federal government is in charge of making sure our food is safe...and American citizens still die from eating poison lettuce. Conversely, a private, cooperative, voluntary, market-driven solution has been implemented (and paid for) by a small, interested group of citizens: the kosher inspection and markings on American food products. Without any government involvement at all, the American Jewish community has installed its own quality control/assurance process into food production, and it is trusted (as far as I know) by every member of that community (and it also serves to protect nonmembers, who get the benefit of food inspection/approval even though they don’t pay for it, and probably don’t know it exists). When people do start to care about their information, they will demonstrate that caring by withholding money from entities that violate privacy, and rewarding entities that offer privacy protection. Until then, adding a law to protect something people only claim to care about is worse than useless: it’s harmful, dangerous, and expensive.

 

But I’m still a giant fanboy. heh.

Thanks again, Catherine, for the excellent summary of what seems to have been a great event!

 

Need-to-Know versus Least Privilege

Got into a great discussion in a recent class, about the difference between these two security concepts (indeed, some of the class thought there wasn’t even a difference). At the time, I was caught up in the conversation, and couldn’t construct a good example to clarify the distinction. But after a bit of time, I’ve formulated one that should do the job:

 

Alice and Bob are drivers/bodyguards for senior managers in the company they work for.

 

Both Alice and Bob have the correct permissions and access mechanisms to perform their duties (which are the same, for their respective managers): each driver has a passcard that will allow them access to the secure garage where the vehicles are stored; they each have authorization to check out keys for the vehicles used to transport the managers. Their passcards do not, however, allow them into other parts of the company property-- they can’t, for instance, use their passcards to enter the Research department, or Accounting. This is an example of least privilege-- they are given only the set of permissions necessary to perform their duties.

 

However, when Alice and Bob arrive at the garage to check out their respective vehicles, they are not given the routes and destinations of other managers-- only those of the manager they are driving/protecting that day. Alice cannot see the destination of Bob’s vehicle, and Bob can’t see Alice’s destination. That information is given only to the people involved in coordinating the movements of the specific senior managers, thus limiting the number of people who might compromise the security of that information. This is an example of need to know-- Bob does not need to know the destination of Alice’s vehicle.

 

To put it in general terms, least privilege usually has to do with clearances and roles, while need to know is typically based on which projects or customers a person is working on/for, and allows for compartmentalization.
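For the code-minded, here is a toy sketch of the same distinction (all names and data are invented for illustration): the role table below is the least-privilege part, and the per-assignment lookup is the need-to-know part.

    # Least privilege: a role grants only the actions needed to do the job.
    ROLE_PERMISSIONS = {
        "driver": {"enter_garage", "check_out_vehicle"},
        "researcher": {"enter_research_lab"},
    }

    # Need to know: route details are compartmented per assignment.
    ASSIGNMENTS = {
        "route-A": {"driver": "alice", "destination": "airport"},
        "route-B": {"driver": "bob", "destination": "downtown office"},
    }

    def has_permission(role, action):
        # Least-privilege check: can this role perform this action at all?
        return action in ROLE_PERMISSIONS.get(role, set())

    def get_destination(user, route_id):
        # Need-to-know check: even a fully privileged driver sees only their own route.
        route = ASSIGNMENTS[route_id]
        if route["driver"] != user:
            raise PermissionError(user + " has no need to know " + route_id)
        return route["destination"]

    # Both drivers pass the least-privilege check for the garage...
    assert has_permission("driver", "enter_garage")
    assert not has_permission("driver", "enter_research_lab")
    # ...but Bob cannot read Alice's destination (this would raise PermissionError):
    # get_destination("bob", "route-A")
    print(get_destination("alice", "route-A"))   # airport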

 

While I may have done a disservice to the class in not coming up with this analogy earlier, I’m hoping it serves the purpose for anyone else confused about these concepts.

 

 

HIPAA or Giraffe?

            When we (in the INFOSEC community) think of HIPAA, we usually think of the security implications and requirements. That’s our perspective, and what’s important to us, as practitioners. The law, on the other hand, has very little to do with security-- most of the security-related content is wedged into the law’s Section 264, which basically tasks the head of the US Department of Health and Human Services with figuring out what protections should be put on medical information for individual patients. When the law is copied from the Web to MSWord, Section 264 comes to about a page of text, while the entire law is 178 pages.

You can find it here:

https://www.govinfo.gov/content/pkg/PLAW-104publ191/html/PLAW-104publ191.htm

 

            The weird thing, from where I sit, is that this law, which is purported to enhance the security of patient data, does pretty much the opposite. The law encourages (just short of a mandate) putting all American medical data into an electronic format, according to a template that the law also tasks the federal government with creating. My question: what is more secure-- paper records or electronic records?

 

            - Paper records can be stolen, modified, or destroyed, assuming an attacker can get physical access to them. Major or minor disasters, such as fire and flood, could likewise destroy/damage physical records. However, copying these records, or modifying them in a quasi-undetectable way, is a cumbersome, time-consuming process: the attacker would have to capture the data with the use of a device (a camera or photocopier), usually page-by-page, and typically with a light source present. Even stealing paper records is somewhat difficult: paper files are fairly heavy, and quite unwieldy...stealing the records of, say, 1,000 patients (if each record is 100 pages long, which is actually a fairly small patient record) would mean hauling 100,000 sheets of paper (on the order of 200 reams, or roughly half a tonne), which would be impossible for a single attacker without using a tool like a forklift or handcart, and making several trips between where the records are stored and where the attacker wants to transport them (say, a vehicle).

 

            - Electronic records are easy to steal in bulk: a file, or a thousand files, or a million files can be moved, erased, or copied without much difference in effort (granted, there may be a considerable difference in the time required to copy a million files and a single file, but compared to the time it would take to copy a million hardcopy files, this duration is negligible). Modifying a single file, or a hundred files, or a thousand, through the use of an automated script, in an otherwise-undetectable manner, would be much easier than trying to physically change a paper record. And electronic theft/destruction/modification can be done remotely: the attacker never needs to have physical access to the data in order to harm it. Electronic media (drives, tapes, etc.) are, of course, still susceptible to physical disasters like fire and flooding.

 

            With that said, an electronic record can be duplicated easily for archival purposes (the same quality that makes it easy to steal also makes it easy to create backups, so that multiple copies can be stored in different locations and thus survive a disaster). An electronic record can be readily encrypted/decrypted by the owner; this would be just about impossible to do with paper records in any reasonable way. And an electronic data store, and each individual file, can be subject to logging and monitoring in a way that is impossible for hardcopy: a piece of paper cannot tell its owner how many eyeballs have seen it.
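Those last two properties are trivial to demonstrate in code. A minimal sketch, assuming the third-party cryptography package is installed (everything else is the Python standard library); a real system would obviously keep the key in a proper key-management service, not in the script:

    import logging
    from datetime import datetime, timezone
    from cryptography.fernet import Fernet

    logging.basicConfig(filename="record_access.log", level=logging.INFO)

    key = Fernet.generate_key()      # in real life, held by a key-management system
    cipher = Fernet(key)

    def store_record(plaintext):
        # Encrypt the record before it ever touches disk.
        return cipher.encrypt(plaintext.encode())

    def read_record(patient_id, blob, who):
        # Every read leaves a log entry -- a sheet of paper cannot do this.
        logging.info("%s read record %s at %s", who, patient_id,
                     datetime.now(timezone.utc).isoformat())
        return cipher.decrypt(blob).decode()

    blob = store_record("blood type: O negative")
    print(read_record("patient-001", blob, who="dr_alice"))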

 

            I’m not really sure the answer to every security issue is “put it on a computer.” Then again, I’m not a Luddite, either: I don’t think we should stick to archaic modes of data processing and communication just to avoid security issues.

            However, I think this law is a perfect example of how attempting to codify security through a given practice/measure can, instead, harm that very same goal. I don’t think there was much of a market for ransoming patient data before HIPAA, and I don’t think hospitals and doctors had much of an IT security budget before data was converted to electronic form (which, again, is not always the best policy: the 414s hacking crew demonstrated all the way back in the 1980s that medical equipment/services could be harmed remotely). But there are also unintended consequences of efforts such as the HIPAA legislation; one of these is that the cost of medical care in the United States continues to escalate, and the cost of compliance for laws such as this makes it harder for new, innovative, small providers to enter the market and compete.

            So was this law useful for patients? Or did it harm them overall, from both a security perspective and in terms of access to healthcare?

            I don’t have hard data on this. I’d be glad to hear whatever anyone else has to contribute, in the comments or in private messages.

Stegotrojansaurus

IF YOU ARE STUDYING FOR A CERTIFICATION EXAM, STOP READING-- THIS IS PURELY ACADEMIC AND WILL ONLY CONFUSE YOU

 

 

When I explain steganography to my students, I usually say, “It’s a message in one medium put inside another medium-- more like encoding than cryptography.” I stress that steganography is NOT crypto, even though the two topics always seem to be taught side by side. I often use the example of Jeremiah Denton, who, as a prisoner of war, blinked the word “torture” in Morse code while being forced to make propaganda films against his country (https://www.youtube.com/watch?v=rufnWLVQcKg). I talk about putting a text message inside the code for a .jpg, and so forth.
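To make that .jpg example concrete, here is a toy sketch of least-significant-bit embedding, operating on raw bytes (think uncompressed pixel data -- a real JPEG is lossy, so naive LSB embedding would not survive it). It hides a message by overwriting only the lowest bit of each carrier byte:

    def embed(carrier, message):
        # Spread the message's bits (LSB first) across the low bit of each carrier byte.
        bits = [(byte >> i) & 1 for byte in message for i in range(8)]
        if len(bits) > len(carrier):
            raise ValueError("carrier too small for message")
        out = bytearray(carrier)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & 0xFE) | bit
        return out

    def extract(carrier, length):
        # Reassemble 'length' bytes from the low bits of the carrier.
        bits = [b & 1 for b in carrier[:length * 8]]
        return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length))

    pixels = bytearray(range(256)) * 4       # stand-in for image data
    stego = embed(pixels, b"torture")        # Denton's hidden message
    print(extract(stego, 7))                 # b'torture'

The carrier still looks almost identical to the original -- only the lowest bit of each byte changes -- which is the whole point.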

 

As almost always happens, a student in a recent class taught me something I did not know before. But this case was exceptional, because it was something that had simply never occurred to me at all, and I don’t think I’ve ever heard anyone else suggest it:

 

Trojan horse applications are a form of steganography.

 

It’s kind of perfect. The malware, which is a message of one medium (the executable), is hidden inside a message of another medium, such as a photo or movie or text document or whatever (well, sometimes-- there are examples of Trojans where both the malware and its carrier are executables, or where there is just one executable with two aspects: one desirable to the victim, and one not).

 

This is purely a philosophical point: it doesn’t mean anything earth-shattering in the world of INFOSEC. But I love it when a student has a completely new take on some fairly old ideas. Blew me away. Good job, Ann-Kathrin.

Amazon Data Leaks

Meh. When I first saw a notice that contained the same words as the headline on this entry, I thought, “well, here begins the end of cloud managed services.”

But then I read an article [like this one] and saw that it was really Amazon employees taking bribes from retailers to remove negative reviews.

So…“help me sell more sandals” is a far cry from “sell me my competitor’s data.” I would imagine Amazon’s main concern is that the bribes are less expensive than what Amazon could otherwise charge for this same service…and that they go directly to the employees, instead of to Amazon.

RECENT CISSP CAT EXAM NOTES

Got an email from a recent former student...the kind of email I really enjoy:

"Hi Ben,I wanted to let you know that I took my test yesterday and passed at 100 questions :D

 

- After our class, I studied using mostly the Boson practice exams (reading the explanation for EVERY question, failed or passed).

- After that I bounced back and forth between Boson random exams, the updated Sunflower guide, and the 11th Hour book (which was great for last-minute cramming, the last 2 days leading up to the exam). I also watched Kelly Handerhan's CISSP prep videos at Cybrary prior to our class, and various other YouTube videos (Larry Greenblatt's CISSP exam tips were helpful) here and there.

- I studied for about 2-3 hours a day, every day, for 4 weeks total (taking 1.5 weeks off for vacation).

- I was 100% certain that I was going to fail while taking the exam.  I was so sure of it that I considered just picking the same letter answer over and over to end the test and GTFO at around 80 questions.  Glad I didn't.

- I took my time reading and re-reading each question and answer so many times that I thought I was going to shoot myself in the foot with the time of the exam.   I had about 30min left at 100 questions.

 

Thank you for all of your wisdom and guidance during our class.  I feel that it helped a lot and set a good expectation for the exam and framework of where to study. It helped me realize my weak areas so I knew where to focus.  Although, the test has a funny way of making you feel that you're completely unprepared while you're actually taking it. :)"

Physical Badsec

If your physical security process involves controlled items, make sure you train your staff not to hand a stack of the controlled items to unauthorized personnel during the procedure...else someone could pilfer one or two, and use them for all sorts of nefarious purposes.

 

In totally unrelated news, if someone wants to smuggle contraband/small children/explosives aboard a cruise ship, drop me a line in the Comments section.

[Image: carnival luggage tag]

Bad Control

The vendor has a policy: checks that are numbered less than 1500 are not accepted.

The clerk tells me to just ask my bank to put a higher number on my checks and send me some new ones.

The control was put in place years ago, to reduce the possibility of fraud from an outdated attack method (does anyone even commit check fraud anymore?). The vendor obviously knows the control is easy to overcome, and only actually prevents legitimate transactions.

This is not a good control.

First person to guess the vendor correctly gets a free copy, your choice, of one of my books. Put your guess in the Comments to this post.

Letting Off Steam

            Valve is a company that makes computer/video games; they also run the Steam game distribution platform, which is an online store/licensing portal that sells games made by other companies. This week, Valve announced it would no longer curate titles on Steam, and would allow any game producer to host any title in the store, for sale to the public (with the notable exceptions of games that contain illegal content and those that are “straight up trolling”). [You can read the announcement at: https://store.steampowered.com/news/]

            This is fascinating, and definitely a reaction to recent public attention focused on one game that Valve took off the Steam platform (and simultaneously banned the game producer), a first-person shooter that simulated mass murder at a school, called Active Shooter. While I’m not sure how that game would run afoul of this new policy (is Active Shooter straight up trolling or illegal content? if neither, why is it still banned?), it seems very interesting to me that Valve chose to modify their approach to hosting titles as a result.

            I am a gamer. And I am interested in maximizing free speech. Valve’s decision therefore delights me greatly. Opponents of Valve’s decision (including writers from disparate sources, such as game review websites and Forbes) kind of puzzle me, and somewhat infuriate me. Their arguments seem to constitute two lines of thought: 

1) By allowing anything, Valve is taking a political stance that endorses everything.

2) By allowing anything, the online store will be swamped with material customers don’t want, such as games that include topics that bother some people, including racial bias, violence, and sexuality. Customers won’t be able to find what they want, because of all the material they don’t want; this will be particularly disturbing to sensitive customers who are offended by those topics.

            Trying to make sense of these criticisms, I draw these two conclusions:

1) I can’t possibly understand why the political stance of “allowing everything” is ugly or wrong: the entire purpose of having a free society (much less a free online store) is so that conflicting ideas and perspectives are allowed to exist (and maybe flourish)....even if most of us don’t particularly like them. Having freedom so that we can all like the same things isn’t freedom, it’s a sheep farm.

2) I don’t think the people saying this A) are gamers and B) understand how the Internet works. To explain in detail:

            A) Gaming is a participatory mode of entertainment unlike any other form of mass media: books, movies, music are all projections of the creators (writers, directors, musicians, singers) at the audience in a unilateral communication; the audience does not communicate with the artist or influence the art. (The notable exception: choose-your-own-adventure books, where outcomes are decided by readers.) In gaming, the player must take part in the activity in order to determine progress/outcome. The artist(s) can present content, but the game doesn’t actually do anything unless the player is utilizing it-- a game without a player is a title screen, and no different from wall art. In terms of recreation, this makes gaming more akin to, say, sports, than literature (with the obvious advantage that gaming does not favor only those with the biological birthright biases of ability, size, speed, etc.).

            So in order to be “affected” by a game (no matter how sensitive you are), you have to actually play the game...which is a conscious choice, and includes the option of stopping at any time. You, the player (or potential player), have full control over whether any selection from that medium, any game, affects you, personally. You have no control over whether someone else can play it or whether they are affected, and nobody else has control over whether you play it or are affected. You. You alone are in charge. Compare this to, say, the television turned to full volume in airport waiting areas: I have no way, as an audience member, to voluntarily not participate: if I want to isolate myself from that communication, I have to take active steps (using headphones/earplugs, purposefully not looking in that direction) to insulate myself from the message.

            Gamers understand this, and relish it-- it is one of the great joys of games. There are many thousands of games I have never played, nor ever will-- those do not affect me in any way, much the same way the millions of sandwiches eaten by other people only affect those people, and not me. There is food on this planet that I do not like, and that would probably cause me intestinal distress: I don’t have to eat that food, and can choose not to.

            Now, is it possible that the title of a particular game offends someone, and just seeing it on a screen bothers someone? Or that hundreds of these titles, listed together, scrolling across a screen, might be distressing to a viewer? Like, if every title in a list of hundreds contained racial/religious epithets, or swear words?

            Maybe that would be bothersome to someone...or maybe it would inure that person to those words, causing those words to lose power. But that’s neither here nor there, because that brings us to point....

            ....B) The Internet is the best shopping market ever devised. I can find almost anything I could possibly want, in a moment, without the trouble of leaving my couch. Steam makes full use of Internet possibilities, allowing a shopper to search for particular terms (or filter out particular terms), see only titles that are preferred, or limit content in any number of ways. So not only does a gamer not have to play a particular game (or genre of games), but the gamer does not even have to see a given title or type of title.

            Those who complain that Steam will be overwhelmed with undesirable games, making it difficult for shoppers to find the games they (the shoppers) like, don’t really want to shop. Because that’s what shopping is: making a choice from among options. The complainers want someone else to make the choice for them (and for all gamers) by limiting the possible options. I find that sad; when an adult wants to forego the power of their own choices, they limit themselves (and when they want to impose it on everyone, they’re limiting all of us).

            Might Steam get inundated with cheap, callous, crass games made by halfhearted or greedy developers less concerned with quality gameplaying experiences than turning a quick buck? Might that make it harder for a shopper to find the gems hidden in piles of dross? Possibly. But that same description could be used for major production houses right now, easily. And sifting through a bunch of crap to find a treasure is one of the great joys of one of my favorite shopping formats: the flea market. I have found some items of great value (both relative and financial) for amazing prices at flea markets...and I have spent hours in flea markets where I’ve seen nothing but crap and not made a single purchase. Did the latter experience harm me in any way? You could argue I lost the value of those hours, but that would be predicated on the assumption that I didn’t receive enjoyment and entertainment value from those hours.

            I assure you, I did.

            Finally, just to offer a couple of thoughts on the public outrage over the specific game that started the whole conversation: Active Shooter. I am not sure why the idea of a simulation that mimics a tragedy, or where the player can pretend to be an awful person, or where entertainment is derived from violence, is something to revile. My friends and I have pretended to be Nazis, committed faux atrocities, and taken pleasure in murder for decades...and those were just board/tabletop games: Axis and Allies, Dungeons and Dragons, and Clue. Oddly, it has never meant that I actually want to invade Poland, slaughter hobgoblinoid people, or take delight at a dinner party in which someone was bludgeoned to death with a heavy plumbing tool.