More CISSP Feedback

A former student offers this insight:

“I just took the test this afternoon. Ended up with 101 questions, in just under 95 minutes, and I passed (unless they decide they need to do "psychometric" (lie detector?) or "forensic" evaluation). At that speed, even if I had gotten the maximum number of questions I would have been O.K. -- from what I've seen, many people report finishing with time to spare, so I would recommend not rushing.

My experience was: lots of "BEST" and "MOST" questions. Definitely not a test to take just based on knowing facts by rote. I did guess on some answers, but only when I could eliminate some of the responses, and I found that often at least one response would not make sense. I also tried to follow advice I saw to "read the question, read the answers, and then read the question again," since during practice tests I often picked the opposite of the answer I knew to be right.

I studied from the Chapple book, the (ISC)2 flash cards, and by taking lots of tests. For tests I had the companion book (which seemed closest to the real test) and the CCCure tests (which too often revealed the answer in the question, but were still useful when taken in "Pro" mode with fill-in-the-blank answers, and gave easier statistics on which areas I needed further study in). It was important each time to go back and understand my wrong answers -- that's where about a third of my learning happened.

A note on CCCure tests and fill-in-the-blank: unless you type the exact phrase in, it will count it as wrong, so review the results before you decide how well you did.”

Great stuff!

Enhance Your Inner Luddite

So...I still run Win7. Mainly because I am a curmudgeon who refuses to evolve. A weird kind of nerd, I know-- not even a late adopter, I am a “maybe I’ll get around to adopting someday” type. Luckily, I don’t feel the same way about dogs as I do tech.

 

Anyway, I often run into problems with the platform, and am stuck trying to puzzle out how to fix the thing by doing Web searches (as I am sure you do, too). I recently stumbled across this place, and it fixed one of my issues right up: www.sevenforums.com. Highly recommended.

CISSP Feedback

Got a detailed message from a former student I’d like to share…good insight for those studying the CISSP:

“I took your class last year (7/16-7/20), sat for the exam in late September, but did not pass. In that instance I know I incorrectly answered a drag-and-drop on controls and was surprised by the number of Cloud questions. That said, I spent the rest of the year and this spring internalizing all the material using the following resources:

 

- CISSP All-in-One Exam Guide, 7th Edition

- Official (ISC)2 Guide to the CISSP CBK ((ISC)2 Press)

- (ISC)2 CISSP Information Systems Security Professional Official Study Guide

- CISSP Official (ISC)2 Practice Tests

- How To Pass Your INFOSEC Certification Test: A Guide To Passing The CISSP, CISA, CISM, Network+, Security+, and CCSP

- CCSP (ISC)2 Certified Cloud Security Professional Official Study Guide

 

I read the (ISC)2 CISSP Information Systems Security Professional Official Study Guide, reviewed and read the CISSP All-in-One Exam Guide, 7th Edition, and took all the end-of-chapter tests as well as the CISSP Official (ISC)2 Practice Tests. I also read your book, How To Pass Your INFOSEC Certification Test: A Guide To Passing The CISSP, CISA, CISM, Network+, Security+, and CCSP. In addition to all of that, I spent some time studying the Cloud security basics contained in the CCSP (ISC)2 Certified Cloud Security Professional Official Study Guide and its test questions.

 

I then sat for and passed the exam on 4/30/2019. I was fed a completely different set of exam questions and again had quite a few Cloud-based questions, which I felt prepared for after studying the CCSP material. It’s been a long road, the test always looks deceptively easy, and I am pleased to have provisionally passed. I must say the practice exam application that came with the Shon Harris book most closely represented the actual exam experience.”

Hope that helps— seems like useful stuff!

Traveling Time

                We’ve all heard of the Butterfly Effect: one small action somewhere can be traced to larger effects somewhere else. The idea is that everything touches everything else, because we’re basically living in a soup of molecules (on the planet, anyway-- space is more like a thin broth, not because of an absence of stuff --there is, in fact, a lot of stuff in space-- but because that stuff is spread out over a very large volume). Molecules bump into each other all the time, causing reactions to those bumps.

                Right now, we can reconstruct causes from their effects at a macro level-- after two cars collide, we can look at the smashed vehicles and determine which one struck the other, estimate the speed each was traveling, etc. But the ability to do that on the micro-micro-micro-micro level --the quantum level-- is only a matter of sufficient computing capacity. By capturing a model of what is happening right at this moment, it is possible to reach backwards into all the possible combinations of molecular and subatomic collisions, and tell what occurred prior, leading up to the moment.

                Not time travel-- there’s no way to go back and modify what occurred. But close-to-perfect time vision. The ability to see everything that happened prior to right now. Everywhere.

                The math is staggering. We’d have to account for every atom, worldwide. And there would be some variables, as space introduces externalities to the (not-closed) system-- dust and rock and energy are constantly bombarding the planet, in non-negligible amounts.

                But once we nail the formula...nothing that ever happened before would be unobservable.

                Forget the end to privacy-- that’s already underway. But the end of ignorance...the end of not knowing. The end of mystery. We will always know exactly what happened.

                It won’t be predictive-- human beings are the reason; free will is the chaos in the soup. We add too much randomness to the formula, because we act from motivations other than instinct or reason.

                But nothing that has already happened will be shrouded from anyone. We will all know what happened, everywhere, always. The applications and implications are vast-- the ways in which this will change how we behave, interact, and function are almost unimaginable and incalculable.

                Almost.

My Impressions of RSA 2019

Multi mega edge cloud defense time perimeter virtual securi secura securahhh trans service auth train policy force intel synch remote filter.

What staggered me more than the products and services were the hackneyed and trite methods for seducing tradeshow passersby; the food, toys, barkers, tchotchkes, crap, celebrities, clothes, stickers, pins, bric-a-brac, come-ons, and flat-out bribes were overwhelming and underwhelming at the same time. There must (MUST) be a better way to get information out to the people who need and want it.

My favorite booth? Two people sitting at an empty desk, with no posters, background, or other frippery. I got their marketing info and card, and will post the name of the company when I get it out of my luggage.

…follow up…this was the company I was talking about: https://www.openiam.com/

I don’t know what they do. I don’t know if their product/service is any good. But I appreciated the approach enough that I’m going to find out.

Million-Dollar App Idea

 

You can have this idea, for free. If you make it, I wanna get a free account, though.

 

Whenever someone calls my cell phone, unless they’re in my Contacts list or otherwise approved, they get a recording that says, “The person you’re trying to reach is rejecting automated calls. To prove you’re a human, press these three numbers now:”...then it speaks three random numbers.

 

Because call spammers will quickly adapt to this new screening technique, the numbers will be read in three wildly different voices, pulled from hundreds of different voiceprints, with varying pitch, speed, and intonation, perhaps with different sounds/music playing in the background of each...like an audio CAPTCHA.
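For anyone tempted to build it, the challenge/verify logic is trivial; the hard part is rendering the digits in varied voices, which I hand-wave here. A minimal sketch-- all names (VOICEPRINTS, make_challenge, verify) are invented for illustration:

```python
# Sketch of the audio-CAPTCHA call screen described above.
# Only the challenge/verify logic is shown; actually speaking the
# digits in randomized voices is out of scope for this sketch.
import secrets

VOICEPRINTS = ["voice_a", "voice_b", "voice_c", "voice_d"]  # placeholder names

def make_challenge():
    """Pick three random digits, each paired with a distinct voice style."""
    digits = [secrets.randbelow(10) for _ in range(3)]
    # A different randomly chosen voice per digit, to frustrate
    # speech-to-text bots (the rendering itself is hand-waved).
    voices = [secrets.choice(VOICEPRINTS) for _ in digits]
    return digits, voices

def verify(challenge_digits, caller_keypresses):
    """Compare the caller's DTMF keypresses against the spoken digits."""
    return caller_keypresses == "".join(str(d) for d in challenge_digits)

digits, voices = make_challenge()
assert verify(digits, "".join(str(d) for d in digits))
```

Passing callers would then be offered a spot on the allowed list, so they only ever hear the challenge once.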

 

It will be slightly annoying for people trying to call me for the first time, but when I’m done with the call, the app should give me the option to add them to the allowed callers list.

 

Somebody smart go make this.

Privately Bruce

One of my former students, the amazing Catherine Thiallier, was recently lucky enough to attend a fascinating event (one I wanted to go to, but was busy teaching, unfortunately): https://it-security-munich.net/event/talking-heads-bruce-schneier-alex-stamos/. She was also kind enough to write up a distillation of the event, and has agreed to let me share it here. So I will post her impressions, then offer my own comment on the topics discussed.

Full disclosure: I am a total Bruce Schneier fanboy. Even when I disagree with him. And I made some minor edits for translation/clarity.

“Hope you are well. I forgot to give you the update on the conference. Bruce Schneier is an amazing talker. Unfortunately he had to leave early and couldn’t stay very long, but he still made a speech and answered at least 10 questions.

Content of his speech, basically (and I guess those are the topics he develops in his books):

- The Internet was not conceived to be secure

- We always think confidentiality is the most important part of the CIA triad, but now it’s actually availability and integrity (I’m more concerned that the brakes would fail on my car, or that somebody at the hospital would change my blood type, than that somebody would read information about me).

- Three big IT security fails: patching (for example, it’s not easily possible for your router at home), authentication (no scalable solution for IoT), and supply chain processes. The keys are policies/regulations; you can’t blame users for choosing the cheapest option. The free market won’t reward the secure solution; it has to come from government/laws. He made the analogy that when he gets on a plane, he doesn’t check the engines or go to the pilot and ask to see his degrees. At a restaurant, he doesn’t check the kitchen. We are trusting the law. The key is trust in applied law.

That was about it for the content of his speech. Then the second planned guest was sick, and instead they invited an amazing woman, Katie Moussouris, who created the bug bounty program at Microsoft and gave a speech about vulnerability disclosure/pentests/bug bounties. She has an amazing and very inspiring personality. Since (shame on me, but I am still new in the field!) I had never heard the term "bug bounty," I was a bit lost-- now I know-- so all in all, a great event!!

I found some other stuff I noted and forgot to write: 

Last century the fundamental question was how much of my life is governed by the market and how much by the state. Now the question is how much of my life is governed by technology and with which rules.

Question from the audience: Whose fault is this? => The market rewards insecure solutions (they’re cheaper). Companies want security as long as they can spy on you. But it’s security OR surveillance by design => it can’t be both!

Ok now I think that's all. I definitely want to read one of his books. Have you read "click here to kill everybody"? I loove the title already :)”

Fantastic stuff! And I’m very jealous she got to attend.

My quick comments on Bruce’s points:

- I think legislation/regulation are the worst mechanisms for enforcement...especially when it comes to IT. Mainly because government (especially the American government) has demonstrated, time and again, that it doesn’t care about privacy, and knows nothing about IT. And Bruce has been the one demonstrating this, to great effect (and some hilarity). Just one example: https://www.schneier.com/crypto-gram/archives/2003/0815.html#6. He’s also been one of the most vocal critics of TSA...which is a perfect example of what happens when you let the government do security.

- The market doesn’t reward secure solutions because the market doesn’t care. People say they care about their personal privacy...but then go ahead and contradict the statement every chance they get. People will respond on a survey that they value the security of their private data...but those same people will give up their passwords for a candy bar, and won’t stop shopping at a vendor who has demonstrated a complete inability to secure that information.

- When the market does care, there are excellent mechanisms for making it much, much more efficient and effective than a government solution. Let’s use the food example Bruce mentioned: the US federal government is in charge of making sure our food is safe...and American citizens still die from eating poison lettuce. Conversely, a private, cooperative, voluntary, market-driven solution has been implemented (and paid for) by a small, interested group of citizens: the kosher inspection and markings on American food products. Without any government involvement at all, the American Jewish community has installed its own quality control/assurance process into food production, and it is trusted (as far as I know) by every member of that community (and it also serves to protect nonmembers, who get the benefit of food inspection/approval even though they don’t pay for it, and probably don’t know it exists). When people do start to care about their information, they will demonstrate that caring by withholding money from entities that violate privacy and rewarding entities that offer privacy protection. Until then, adding a law to protect something people only claim to care about is worse than useless: it’s harmful and dangerous and expensive.

 

But I’m still a giant fanboy. heh.

Thanks again, Catherine, for an excellent summary of what seems to have been a great event!

 

Need-to-Know versus Least Privilege

Got into a great discussion in a recent class, about the difference between these two security concepts (indeed, some of the class thought there wasn’t even a difference). At the time, I was caught up in the conversation, and couldn’t construct a good example to clarify the distinction. But after a bit of time, I’ve formulated one that should do the job:

 

Alice and Bob are drivers/bodyguards for senior managers in the company they work for.

 

Both Alice and Bob have the correct permissions and access mechanisms to perform their duties (which are the same, for their respective managers): each driver has a passcard that will allow them access to the secure garage where the vehicles are stored; they each have authorization to check out keys for the vehicles used to transport the managers. Their passcards do not, however, allow them into other parts of the company property-- they can’t, for instance, use their passcards to enter the Research department, or Accounting. This is an example of least privilege-- they are only given a set of permissions necessary to perform their duties.

 

However, when Alice and Bob arrive at the garage to check out their respective vehicles, they are not given the route and destination of other managers-- only of the manager each is driving/protecting that day. Alice cannot see the destination of Bob’s vehicle, and Bob can’t see Alice’s destination. That information is given only to the people involved in coordinating the movements of the specific senior managers, thus limiting the number of people who might compromise the security of that information. This is an example of need to know-- Bob does not need to know the destination of Alice’s vehicle.

 

To put it in general terms, least privilege usually has to do with clearances and roles, while need to know is typically based on which projects or customers a person is working on/for, and allows for compartmentalization.

 

While I may have done a disservice to the class in not coming up with this analogy earlier, I’m hoping it serves the purpose for anyone else confused about these concepts.

 

 

HIPAA or Giraffe?

            When we (in the INFOSEC community) think of HIPAA, we usually think of the security implications and requirements. That’s our perspective, and what’s important to us as practitioners. The law, on the other hand, has very little to do with security-- most of the security-related content is wedged into the law’s Section 264, which basically tasks the head of the US Department of Health and Human Services with figuring out what protections should be put on medical information for individual patients. When the law is copied from the Web to MSWord, Section 264 comes to about a page of text, while the entire law is 178 pages.

You can find it here:

https://www.govinfo.gov/content/pkg/PLAW-104publ191/html/PLAW-104publ191.htm

 

            The weird thing, from where I sit, is that this law, which is purported to enhance the security of patient data, does pretty much the opposite. The law encourages (just short of a mandate) putting all American medical data into an electronic format, according to a template that the law also tasks the federal government with creating. My question: what is more secure-- paper records or electronic records?

 

            - Paper records can be stolen, modified, or destroyed, assuming an attacker can gain physical access to them. Major or minor disasters, such as fire and flood, could likewise destroy/damage physical records. However, copying these records, or modifying them in a quasi-undetectable way, is a cumbersome, time-consuming process: the attacker would have to capture the data with the use of a device (a camera or photocopier), usually page-by-page, and typically with a light source present. Even stealing paper records is somewhat difficult: paper files are fairly heavy, and quite unwieldy...stealing the records of, say, 1,000 patients (if each record is 100 pages long, which is actually a fairly small patient record) would be nearly impossible for a single attacker without using a tool like a forklift or handcart, and making several trips between where the records are stored and where the attacker wants to transport them (say, a vehicle).

 

            - Electronic records are easy to steal in bulk: a file or a thousand files or a million files can be moved, erased, or copied without much difference in effort (granted, there may be a considerable difference in the time required to copy a million files versus a single file, but compared to the time it would take to copy a million hardcopy files, this duration is negligible). Modifying a single file, or a hundred files, or a thousand, through the use of an automated script, in an otherwise-undetectable manner, would be much easier than trying to physically change a paper record. And electronic theft/destruction/modification can be done remotely: the attacker never needs to have physical access to the data in order to harm it. Electronic media (drives, tapes, etc.) are still susceptible to physical disasters like fire and flooding.

 

            With that said, an electronic record can be duplicated easily for archival purposes (the same quality that makes it easy to steal also makes it easy to create backups-- multiple copies that might be stored in different locations, and thus survive a disaster). An electronic record can be readily encrypted/decrypted by the owner; this would be just about impossible to do with paper records in any reasonable way. And an electronic data store, and each individual file, can be subject to logging and monitoring in a way that is impossible for hardcopy: a piece of paper cannot tell its owner how many eyeballs have seen it.
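That last point-- a record that can report who has seen it-- is easy to sketch in code. A minimal, illustrative model (all names hypothetical, no real health-data handling implied):

```python
# Toy model of an auditable electronic record store: every read
# leaves a log entry, something a paper chart cannot do.
import datetime

class PatientStore:
    def __init__(self):
        self._records = {}
        self.audit_log = []          # who read which record, and when

    def put(self, patient_id, record):
        self._records[patient_id] = record

    def get(self, patient_id, accessed_by):
        # Log the access before returning the data.
        self.audit_log.append({
            "who": accessed_by,
            "record": patient_id,
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        return self._records[patient_id]

store = PatientStore()
store.put("p-001", {"blood_type": "O+"})
store.get("p-001", accessed_by="dr_alice")
print(len(store.audit_log))  # -> 1
```

Of course, the same property cuts both ways: the log itself is now one more electronic artifact to protect.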

 

            I’m not really sure the answer to every security issue is “put it on a computer.” Conversely, I’m not a Luddite, either: I don’t think we should stick to archaic modes of data processing and communication just to avoid security issues.

            However, I think this law is a perfect example of how attempting to codify security through a given practice/measure can, instead, harm that very same goal. I don’t think there was much of a market for ransoming patient data before HIPAA, and I don’t think hospitals and doctors had much of an IT security budget before data was converted to electronic form (which, again, is not always the best policy: the 414s hacking crew demonstrated all the way back in the 1980s that medical equipment/services could be harmed remotely). But there are also unintended consequences of efforts such as the HIPAA legislation; one of these is that the cost of medical care in the United States continues to escalate, and the cost of compliance for laws such as this make it harder for new, innovative, small providers to enter the market and compete.

            So was this law useful for patients? Or did it harm them-- from both a security perspective and an access-to-healthcare perspective-- overall?

            I don’t have much info about it. Glad to hear whatever anyone else has to contribute, in the comments or in private messages.


Stegotrojansaurus

IF YOU ARE STUDYING FOR A CERTIFICATION EXAM, STOP READING-- THIS IS PURELY ACADEMIC AND WILL ONLY CONFUSE YOU

 

 

When I explain steganography to my students, I usually say, “It’s a message in one medium put inside another medium-- more like encoding than cryptography.” I stress that steganography is NOT crypto, even though the two topics always seem to be taught together. I often use the example of Jeremiah Denton, who, as a prisoner of war, blinked the word “torture” in Morse code while being forced to make propaganda films against his country (https://www.youtube.com/watch?v=rufnWLVQcKg). I talk about putting a text message inside the code for a .jpg, and so forth.
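That text-inside-a-.jpg idea can be sketched in a few lines. This is a deliberately crude illustration, not a real steganography tool (real tools use subtler techniques, like least-significant-bit embedding); the function names are mine, and a stub of bytes stands in for an actual image:

```python
# Hide a text payload after the JPEG end-of-image marker (FF D9).
# Decoders stop rendering at that marker, so appended bytes are
# invisible to a casual viewer -- and trivially found by anyone
# who looks at the raw file.

EOI = b"\xff\xd9"  # JPEG end-of-image marker

def hide(jpeg_bytes: bytes, secret: str) -> bytes:
    """Append a secret after a complete JPEG stream."""
    if EOI not in jpeg_bytes:
        raise ValueError("not a complete JPEG stream")
    return jpeg_bytes + secret.encode("utf-8")

def reveal(stego_bytes: bytes) -> str:
    """Recover whatever follows the last EOI marker."""
    _, _, tail = stego_bytes.rpartition(EOI)
    return tail.decode("utf-8")

# A stub "image": SOI marker, some dummy data, EOI marker.
cover = b"\xff\xd8" + b"\x00" * 16 + EOI
stego = hide(cover, "meet at dawn")
print(reveal(stego))  # -> meet at dawn
```

Note there’s no encryption anywhere in that sketch-- which is exactly the point I make in class: hiding a message and scrambling a message are different jobs.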

 

As almost always happens, a student in a recent class taught me something I did not know before. But this case was exceptional, because it was something that had simply never occurred to me at all, and I don’t think I’ve ever heard anyone else suggest it:

 

Trojan horse applications are a form of steganography.

 

It’s kind of perfect. The malware, which is a message in one medium (the executable), is hidden inside a message in another medium, such as a photo or movie or text document or whatever (sometimes-- there are examples of Trojans where both the malware and its carrier are executables, or there is just one executable with two aspects: one desirable to the victim, and one not).

 

This is purely a philosophical point: it doesn’t mean anything earth-shattering in the world of INFOSEC. But I love it when a student has a completely new take on some fairly old ideas. Blew me away. Good job, Ann-Kathrin.

'Membering Mnemonics

I often learn quite a bit from the students I’m supposed to be teaching (mainly because they’re invariably smarter than I am). I also love mnemonics— those mental tricks and reminders that help you recall concepts and bits of information.

This past week, one of my classes was going over the OSI and TCP/IP networking models; I had a great mnemonic for OSI (Please Do Not Teach Security People Anything)…but I confessed that I have no way of remembering the TCP/IP Model.

One of the participants said she had something, but was reluctant to share. We coaxed it out of her. I share it, slightly modified, with you now:

“Not In The….Arctic.”

I will never again forget the names of the four layers of the TCP/IP networking model. Though I may try.

Thanks, Lauri— you’re a good teacher.

Wandering Security

For the first time ever, I ran across a hotel business center (desktop PC and printer) that had the USB ports physically blocked out. I find that interesting only because I’ve often considered how easy it would be to introduce malware/whatever into a business center (and often hoped those machines are airgapped from the hotel’s production environment).

Of course, this was at a time when I needed to print something off a USB stick, instead of, say, an email I could access through a Web browser.

I found out that unplugging the keyboard would, yes, open a viable USB port that wasn’t limited to just human interface devices. Sure, I was limited to inputs from the mouse in order to manipulate the file (because, well— no keyboard). It seems someone put at least some good thought into locking down that system, but then left a giant pathway right through their control policy.

Not sure what the workaround would be, short of putting Super Glue on all the keyboard/monitor USB connections for every PC in every property in that hotel chain. Or going with thin clients that have peripherals that are hardwired and not connected by USB (come to think of it, with a very limited target functionality, why does the business center need full PCs, anyway?).

Anyone ever given any thought to this?