Gary Marriott
69 followers
About
Gary's posts

Post has attachment
This is in response to Bruce Schneier; I am forced to question his mention of the OPM hack.
"Bruce,
Are you absolutely sure the OPM (BTW, it is Personnel, not personal) data breach was not also an integrity and availability hack?

Since the OPM's records were the key arbiter of personnel security validity, how do we know that the persons who had unauthorised access were not also able to alter those same records, damaging their integrity?

And of course, if you are forced to assume that, then you have to re-verify the truth of every record they have, which is a real Denial of Security Vetting attack."

Post has attachment
Interesting:
https://www.youtube.com/watch?v=wnTGO6OFgCo

The head of the NSA/CSS supports strong encryption and says, “So spending time arguing about ‘hey, encryption is bad and we ought to do away with it’ … that’s a waste of time to me.”

But although this may appear to be a reversal of his prior opinion, he is still pushing for technology companies to change their business models so as not to offer end-to-end encryption.

Post has attachment
From this: https://www.schneier.com/blog/archives/2016/01/uk_government_p.html?nc=8#comment-6715755

Although you judge harshly, it is a fact that the supposed bastions of democracy, the US and the UK, differ widely in their democratic construction. Specifically, the UK had no bill of rights or written constitution to guard citizens' rights until the relatively recent adoption of the European Convention on Human Rights.

Because of this, it was possible for the UK government to enact legally binding legislation, going back almost a century, requiring communications providers to build the capability for third-party interception into their equipment. That is the reason why this mainstream VOIP solution has key escrow.

It is only the inexorable onward march of Moore's law and the weakening of warrant laws that have made it possible to use this technology for more than specifically targeted interception, i.e. for bulk collection.

Which is where the communications operators can, and sometimes do, take a small stand by sticking to the letter of the law and protecting their customers' privacy from extrajudicial exposure.

Therefore, while there is still a need for targeted interception, perhaps with current and future technology there is also a need for open oversight external to the authorities. What that looks like, I don't know.

Post has shared content
See: http://yro.slashdot.org/story/15/11/18/2136224/carnegie-mellon-denies-fbi-paid-for-tor-breaking-research

OK, let's accept for now that CMU did not receive payment for their data and that they only gave up their data upon subpoena; that is really just icing on the real issue: the unethical disclosure of people's private data, resulting in an indirect FBI evidential fishing exercise. Such an exercise is allowed in discovery unless the evidential collection was prompted (hence the $1), which would render it 'fruit of the poisonous tree', and that is perhaps why so much emphasis is being placed upon payment.

Remember this: any entity involved in security research, or even just a business, can be subpoenaed for its data and required by law not to disclose the fact of the request. Further, resisting such requests can lead to extended legal difficulties; just ask Ladar Levison ( https://en.wikipedia.org/wiki/Lavabit ).

So what CMU did wrong here (if the current evidence is correct) was to collect and keep significant personal information as a result of their 'research', which is incompatible with what security research is about. If an ethical review board had been overseeing the ongoing CMU research, this should have been noticed and changes made.

Thus, what could CMU have done?

* They could have set up an internal review board to examine the ethical, legal and other issues of such research {they admit they did not}.
* They could have designed the data-collection part of their exploit to anonymise the data, so that connection inferences could be made without disclosing actual IP addresses (simply make a salted hash of each IP address; see the sketch after this list) {they did not}.
* They could have limited collection to just what was needed to prove the exploit and then shut it down {they did not}; instead they ran it for over three months.
* Upon proving the method, they could have immediately followed responsible disclosure and briefed the Tor group {they did not}.
* If the research was launched initially by an FBI request or similar, they should have taken legal advice, realised that they could not do this ethically or follow the above, and thus NOT agreed to do it {clearly, if so, they failed}.
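
As an illustration of the anonymisation point above, here is a minimal sketch (purely hypothetical; the keyed-hash approach, names and salt handling are my own assumptions, not anything CMU actually ran) of pseudonymising observed IP addresses so that correlation remains possible without ever storing the raw addresses:

    # Hypothetical sketch: replace each observed IP with a salted/keyed hash.
    # Equal addresses map to equal tokens, so traffic-correlation inferences
    # still work, but the raw address is never written to disk.
    import hmac, hashlib, os

    STUDY_SALT = os.urandom(32)          # generated once, held only in memory

    def pseudonymize_ip(ip_address: str) -> str:
        """Return a stable, non-reversible token for an observed IP address."""
        return hmac.new(STUDY_SALT, ip_address.encode(), hashlib.sha256).hexdigest()

    # A plain unsalted hash could be reversed by enumerating the IPv4 address
    # space, which is why the secret salt/key matters.
    assert pseudonymize_ip("203.0.113.7") == pseudonymize_ip("203.0.113.7")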

So, in closing, take note: in the current legal and criminal climate, DON'T collect and store unnecessary information unless you can prove that you can protect it from disclosure via untargeted or extralegal means, lest you and your establishment end up in hot water (see Sony, Ashley Madison, CMU, the NSA, etc.).

Post has attachment
Gary Marriott commented on a post on Blogger.
This is a good write-up, but its outcome depends very much on the assumptions made. Yes, scrypt can be weaker than bcrypt (if you use specific chosen parameters). Also, where 'parallelizable' is stated in step two, that is a function of using the p parameter above 1 (which I do not recommend).

Finally, a much better way to use scrypt for password hashing is to use it AS the pluggable function in a modified PBKDF2, such that the outer (non-parallelizable) loop is run for a specific time on the given hardware, e.g. one second, and the iteration count is then passed forward on the front of the output hash. The inner scrypt should be set with p=1 and the other parameters chosen so the memory cost is well above the maximum size of available local memory in ASIC, FPGA, GPU or CPU L1/L2 caches.
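
For concreteness, a minimal sketch of that idea in Python follows (the parameters, storage format and helper names are illustrative assumptions, not a vetted construction): chain hashlib.scrypt calls in a strictly sequential outer loop until the time budget is spent, then record the iteration count in front of the stored hash so verification can replay the same work factor.

    # Hypothetical sketch: time-calibrated sequential outer loop around scrypt (p=1).
    import hashlib, hmac, os, time

    N, R, P = 2**14, 8, 1                  # assumed memory-hard parameters (~16 MiB)
    MAXMEM = 64 * 1024 * 1024              # give scrypt enough memory headroom

    def _scrypt(data: bytes, salt: bytes) -> bytes:
        return hashlib.scrypt(data, salt=salt, n=N, r=R, p=P, maxmem=MAXMEM, dklen=32)

    def hash_password(password: str, target_seconds: float = 1.0) -> str:
        salt = os.urandom(16)
        digest = password.encode()
        iterations = 0
        start = time.monotonic()
        # Outer, non-parallelizable loop: keep chaining scrypt until the budget is spent.
        while time.monotonic() - start < target_seconds:
            digest = _scrypt(digest, salt)
            iterations += 1
        # The iteration count goes on the front of the output, as described above.
        return f"{iterations}${salt.hex()}${digest.hex()}"

    def verify_password(password: str, stored: str) -> bool:
        iterations, salt_hex, digest_hex = stored.split("$")
        salt = bytes.fromhex(salt_hex)
        digest = password.encode()
        for _ in range(int(iterations)):
            digest = _scrypt(digest, salt)
        return hmac.compare_digest(digest.hex(), digest_hex)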

Post has attachment
A nice summary from Bruce Schneier of why the FBI etc. want back doors and why back doors will not help them:
https://www.schneier.com/blog/archives/2015/07/back_doors_wont.html

To summarise his summary: to stop a determined adversary from 'going dark', it is not enough to just put back doors in the security products under your control; you need to do this to ALL SECURITY PRODUCTS EVERYWHERE, or prevent their use by EVERYONE!

At which point any nation doing this would become less free than North Korea.

Seems the proposal is akin to shelling peanuts with a thermonuclear device.

Post has shared content
I have to say, even if the US and UK both make sweeping requirements that all commercial encryption products have LEO back doors, AND all their allies follow suit, it will still be a futile effort, for the following reasons:

a) Open source software cannot be controlled in this way by any one entity or group of entities (ideas cannot be un-invented).

b) An encrypted communication (excluding headers) is by definition indistinguishable from random noise. Thus a double-encrypted communication looks just the same as a singly encrypted one, so a mandated encryption scheme can be used to wrap an open-source secure one (see the toy sketch after this list).

c) Evil-doers have to be assumed to be just as smart as the people chasing them, so they will utilise a) and b) to look legitimate.

d) Everyone else, not doing a) and b), is vulnerable to the same evil-doers gaining access to the LEO-secured keys.
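
As a toy illustration of point b) (a hypothetical sketch using the widely available Python 'cryptography' package; the keys, layers and message are of course made up), wrapping one encryption layer inside another leaves the outer, mandated layer with nothing but random-looking bytes:

    # Hypothetical sketch: inner "open-source" layer wrapped by an outer
    # "mandated/escrowed" layer; both outputs look like random noise.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    inner_key = AESGCM.generate_key(bit_length=256)   # known only to the two parties
    outer_key = AESGCM.generate_key(bit_length=256)   # the key the mandated scheme escrows

    def encrypt(key: bytes, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    message = b"meet at dawn"
    inner = encrypt(inner_key, message)    # open-source end-to-end layer
    wrapped = encrypt(outer_key, inner)    # mandated layer on top

    # Whoever holds only outer_key can strip the mandated layer, but what
    # remains is still ciphertext that is indistinguishable from random noise.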

In the end, this idea is not only foolhardy but possibly exactly what the bad guys want.

Post has shared content
It's almost magic, in the sense that SQRL's technology is sufficiently advanced to appear so. And yet it is anonymous, secure and simple.
Hey everyone! Something cool to share:
Yesterday (Tuesday) during our weekly Security Now! podcast, I used a working beta iOS SQRL client on an iPhone (supporting the nearly finished SQRL secure identity authentication system) to log onto Leo's computer 452 miles away!
Here's the 4-minute segment showing how it went. Check it out! More coming soon, Thanks!!