Forward Secure Sealing (FSS) is finally coming to +systemd's journal. FSS allows us to cryptographically "seal" the system logs in regular time intervals, so that if your machine is hacked the attacker cannot alter log history (but can still entirely delete it). It works by generating a key pair of "sealing key" and "verification key". The former stays on the machine whose logs are to be protected and is automatically changed in regular intervals (and the previous one securely deleted), the latter should be written down on a piece of paper or stored on your phone or some other secure location (that means: not on the machine whose logs are to be protected). With the verification key at hand you can verify the journals on the machine and be sure that -- if the verification is successful -- log history until the point where the machine was cracked has not been altered a posteriori.
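For readers who want to see the moving parts, here is a minimal sketch of the idea in Python. This is not the actual FSPRG construction (the forthcoming paper will describe that); it substitutes a plain SHA-256 hash chain for the key evolution and an HMAC for the seal, and all names here are invented for illustration:

```python
import hashlib
import hmac
import os

def next_key(key: bytes) -> bytes:
    # One-way step: the next sealing key is a hash of the current one,
    # so an erased earlier key cannot be recovered from a later one.
    return hashlib.sha256(b"fss-evolve|" + key).digest()

def seal(key: bytes, record: bytes) -> bytes:
    # "Seal" a log record for the current epoch with an HMAC tag.
    return hmac.new(key, record, hashlib.sha256).digest()

# The verification key stays off the machine; the machine holds only
# the current sealing key, one step down the one-way chain.
verification_key = os.urandom(32)
sealing_key = next_key(verification_key)          # epoch-0 key on the machine

tag = seal(sealing_key, b"sshd: session opened for user root")

# Epoch rollover: derive the next key, securely forget the old one.
sealing_key = next_key(sealing_key)               # epoch-1 key

# Offline, the holder of the verification key re-derives the epoch-0 key
# and checks the tag; an attacker holding only the epoch-1 key cannot.
epoch0_key = next_key(verification_key)
assert hmac.compare_digest(tag, seal(epoch0_key, b"sshd: session opened for user root"))
```

The one-way evolution plus secure erasure is what makes the scheme "forward secure": compromising the machine yields only the current key, never past ones.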

What is this all good for? Attackers tend to hide all traces of their break-in on success, to make sure the administrator doesn't notice it. Usually this means they carefully edit the log files and remove all traces of the log events they themselves generated, leaving everything else around so that the administrator doesn't get suspicious. With FSS enabled this becomes much harder: the attacker cannot hide his traces anymore. He can still delete all log files entirely, but that is something the administrator is much more likely to notice.

Traditionally this problem has been dealt with by having an external secured log server to instantly log to, or even a local line printer directly connected to the log system. But these solutions are more complex to set up, require external infrastructure and have certain scalability problems. With FSS we now have a simple alternative that works without any external infrastructure.

Of course, writing down the verification key is not much fun (even though we tried hard to make it very short), hence to simplify this we added a little gimmick for you: we show a QR code on the terminal so that you can simply scan it off the screen and store it on your phone.

FSS is based on "Forward Secure Pseudo Random Generators" by Bertram Poettering at Royal Holloway, University of London, who is a cryptography postdoc and researches these kinds of things. (He also happens to be my brother...) A paper about FSPRG will be published shortly.

This will soon be available in F18.

Anyway, for now I'll leave you with a screenshot of the key generator tool with the QR code... Neat, right? ;-)

The code is all available in systemd git already.

I'll post a longer blog post with more details and explanations soonishly.

(Oh yeah, the QR code on this screenshot has black stripes on it. It's VTE's fault. It shouldn't negatively impact the readability of the QR code: https://bugzilla.gnome.org/show_bug.cgi?id=435000 )
 
In .nl almost every ad seemed to have a QR code. Noticed that lately almost none have them anymore. Fortunately they finally realized that typing in a URL is way less work than finding the QR scanning app and so on.

Easy way to copy that secret verification key seems like a better use.
 
+Olav Vitters well, you can just copy/paste it too, it's shown as well. But honestly, this is for admins, not for non-technical folks, so I think a QR code in this context is quite OK.
 
I can predict the comments: "another Poettering messing with my Linux". Just joking, nice feature!
 
+Andrew Wyatt cool, then what you are doing is not helpful, and just noise, and hence a waste of time, and hence why do you post this at all? You are contradicting yourself there...
 
+Andrew Wyatt , I have plenty of time to waste, so you're more than welcome to borrow some and explain. I'd like to hear your objections.
 
+Andrew Wyatt: You forgot you're responding directly to someone on Google Plus? You're kidding I assume?
 
+Andrew Wyatt Cool, I'll translate this for everybody else: +Andrew Wyatt has no clue what he is talking about, but is full of negativity, hence tries to spread FUD and be as unspecific as possible, making wild claims out of thin air, just to make this negativity stick.
 
+Andrew Wyatt I never claimed that FSS tries to solve the problem of attacks from the inside. That is a fundamentally hard problem, since it is entirely against the protection model of Unix: the admin is in power, and this gives him the power to stay in power. If you want to fix that problem then an external log server won't help you at all. You need to change Unix from the ground up, and not just that as the untrusted admin problem is primarily a social problem that you can only do so much about solving technically.
 
+Andrew Wyatt: If someone has root, they can do whatever they want. I fail to see how you prevent anything against someone who has root / is trusted.
 
It seems like (without checking the details) this actually can protect against the admin rewriting the logs, if someone else holds the external verification key.

I'm glad someone got around to implementing this.
 
In case anybody wonders, I have blocked +Andrew Wyatt now. If people get personal I am more than happy to remove their comments, and ultimately block them.
 
Well.. I had planned on directing this at +Andrew Wyatt , but if he's blocked I don't know if he'll still see it. Still think it's relevant to his stated concern though.

I agree with external syslog servers being the primary solution for maintaining secure log audit trails. However, when talking about system admins as the problem, the question becomes: "who controls the remote syslog server"? In most organizations I'd imagine the answer would be "the same admin(s)". So how does that "remove the control of the logs from the administrators of the systems"?

In an ideal world that would not be the case, I agree.  But in that ideal world about the only way my log server admin doesn't admin something else is if all they are responsible for is the log servers.  Is this likely?

That being said, this doesn't fix it either. As LP said that is a deeper fundamental issue.

Should be interesting to come up with a way to track and maintain the "verification key". Could be nasty to manage with lots of servers. I'm curious to see where that goes.
 
I always come back to +Lennart Poettering's posts for more comment deliciousness. Yum... never fails. Anyways, I think it's pretty cool. It's somewhat like git SHA1 being able to verify the whole history. We definitely want immutability for logs.
 
+Martin Pool well, whatever happens, the only thing we can guarantee is that the history has not been altered up to the point where the attacker (or admin) got control of the machine. From that point on you cannot trust the machine in any way anymore, regardless of whether FSS is used, a line printer is connected or some external log server is used. Since log messages are generated by processes and administrators can manipulate them, they can fake whatever they want.
 
+Greg Swift As soon as a machine is under the control of an attacker/rogue admin he can replace the syslog daemon transparently (even without having it generate and log messages on its own), so that from then on it only generates nonsense (or filters everything interesting out). If this useless data is then centralized on an external log server it is still useless data. Which boils down to the key thing to really understand here: from the moment the evildoer got access you cannot trust the machine in any way anymore, neither what it does nor what it logs, since the logs are just generated from code it executes.

Oh, and regarding tracking the keys. The idea is that admins just scan these off the screen. We format the keys in URL style so that people could easily write a little Android app that registers itself as URL handler for these keys, parses them and stores them in a little database on the phone. I am hoping that somebody else hacks that up though, as I have zero skills in Android development...
 
+Greg Swift with a QR code..
One can also just print it and stick it on the physical box as a sticker

yes I know, physical security and all, but forensics-wise, the more widely known the key is, the harder it is to fake.
 
+Lennart Poettering well yes, once you are compromised all bets are off.  I was mainly just addressing the who controls what scenario.

Personally I'm not inclined to go the app route for tracking. I'd likely lean towards populating the key back into my management system (something like cobbler). Might even be handy to have that doable by the install system remotely so that it's done immediately during kickstart? I dunno.. interesting potential.

+Arjan van de Ven The sticker on the box is an interesting concept. Considering I've never seen most of the servers I manage, it wouldn't be the route I'd take, but it is a very interesting idea. Don't have to worry about a phone being dropped, or coffee being spilled on your stack of papers (well.. hopefully you don't let drinks in your dc...)
 
+Greg Swift yes, it is optional. It is both compile-time optional and runtime optional. In fact, it defaults to off and will be enabled only if the admin runs "journalctl --setup-keys", to generate a key pair.
 
That's what I figured, but wanted to not assume. Thanks
 
QR code, cool. I wish I had it for other passwords too, integrated in a password safe app ...
 
The case I was thinking of is when there are log events prior to the compromise that provide useful evidence. The attacker could still delete them, but at least that deletion would be obvious, if this works the way I think it does.
 
+Christoph Anton Mitterer IIUC, the write key gets mutated in a non-reversible way as things get written, so while the bad guy can generate whatever log [s]he wants after, say, gaining root, it isn't cryptographically possible to rewind the signing key to modify the log from before the break-in.

I don't know the specifics at all, but I guess it would be possible to delete the existing log completely and generate new ones mimicking the old one w/ specific modifications at a purely algorithmic level, but I suppose there are protections against that too - e.g. key mutation could reflect bytes encoded in such a way that verification can notice the hole in the byte timeline, or maybe the write key is mutated according to wall time.
 
If you can't prevent deletion, encryption (except for protecting sensitive information that you believe they couldn't hack otherwise) is rather useless; your system is compromised... if they are going to bother taking the time to try and cover their tracks, who's to say they can't just make an entirely false replacement file for a "sealed" log file. Sorry to flush your dreams, but this seems like an incomplete deterrent, or just completely useless depending on how far a hacker can go.
 
+Fábio Bertinatto well, phoronix is very confused, that article and its title are grossly misleading.

+Christoph Anton Mitterer we do not have signatures here in the accepted definition, hence we call this sealing. As mentioned in the posting above the sealing key is regularly and automatically updated based on wallclock time. This is done in a way (as +Tejun Heo already explained) that the next key may be calculated from the previous one, but not the other way round. Every time the next key is calculated the old key is securely erased from the machine so that the attacker has no chance to ever recover it. With the verification key however the key for a specific time range can be calculated directly. Which hence allows us to verify data quickly, but be sure that nobody can pass fake journal files to the verifier which carry valid seals for time ranges already in past.

And no, there is no need to copy the keys from the machine. The verification key is capable of calculating any sealing key directly.

+Olivier Crête because it requires infrastructure, and is easy to eavesdrop/manipulate by an attacker, and doesn't scale as nicely.

+Eli Sand there is no encryption involved anywhere, where did you read that? This is about authenticating logs, not making them unreadable.

+Christopher Overton not sure I follow but note that journal verification with the verification key to be useful should happen offline, on a machine you trust.
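To illustrate the point that the verification key can calculate the sealing key for any specific time range, here is a hypothetical sketch. The real FSPRG presumably does this far more efficiently than walking a chain; the hash-chain stand-in and all names here are my own, not systemd's:

```python
import hashlib
import hmac

def key_for_epoch(verification_key: bytes, epoch: int) -> bytes:
    # The verifier can reach any epoch's sealing key by walking the
    # one-way chain forward from the verification key. (A toy: the real
    # scheme can jump to an epoch without iterating every step.)
    k = verification_key
    for _ in range(epoch + 1):
        k = hashlib.sha256(b"fss-evolve|" + k).digest()
    return k

def verify(verification_key: bytes, epoch: int, record: bytes, tag: bytes) -> bool:
    # Recompute the seal for the claimed epoch and compare in constant time.
    k = key_for_epoch(verification_key, epoch)
    expected = hmac.new(k, record, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# Demo with a placeholder verification key.
vk = bytes(32)
tag = hmac.new(key_for_epoch(vk, 3), b"journal record", hashlib.sha256).digest()
assert verify(vk, 3, b"journal record", tag)        # right epoch: seal checks out
assert not verify(vk, 2, b"journal record", tag)    # wrong epoch: rejected
```

Since only the verifier holds the root of the chain, this check must happen offline on a trusted machine, exactly as described above.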
 
Oh, the attached bug is only 5 years young... Sadly this is the fate of many bug reports.
 
+Lennart Poettering +1 for adding an audit trail feature to journald! Good job. ... So, systemd is becoming kind of a family business now?
 
Immutability for logs! Nice. Now about that Android app....
 
Be sure to remember to post a link to that paper, I'd like to read it!
 
+James Rhodes Verification of the files from the machine itself is not useful, since the verification executable might have been patched by the attacker. Hence doing this verification at boot is not helpful. It should happen offline.

+Christopher Overton There is no "key encryption". This is not about  encryption at all. Not sure where you are taking this from. The verification key is used to authenticate the journal itself. The key pair is usually generated on the machine itself.
 
+Lennart Poettering I think I see where Christopher is coming from, although the phrasing is a bit odd. I'm coming at this from the mental model of this being somewhat like S/Key, but with the timestamp as an additional input to each iteration, calculated lazily instead of up front, and using the output of each iteration as a sealing key with the 'root' as the verification key. (I'm ignoring what is required for the output of each iteration to be usable for 'seal' and 'verify' operations). From that, the 'sequential hashing of a key alongside the messages' makes more sense.

Mm, no, on re-reading he has some other stuff mixed in with it that confuses the issue a bit. Specifically the 'patterned morphology' bit scans rather strangely, but kind of makes sense if I read it as talking about a terminal-scraping piece of malware. However, since the code should be generated when the machine is provisioned for obvious reasons, it should be in a known-good state.
 
Hmmm... Thinking on it further, I find myself really curious about the specifics of the system. Extending my S/Key based description above, it might even be possible to simply use the iteration as the key to a MAC (possibly after key stretching) over each message. Since the 'root' can compute each iteration (assuming known timestamps, at least; possibly use timestamps synced to the first log message to use a new key), it can therefore verify every MAC.
 
+Lennart Poettering So as long as the currently used key is stored in a file on disk, all messages signed with this key can be faked? Meaning an attacker has at most 15min to hide his traces before he can no longer make any unnoticed changes to the log?
Anyway, really nice system and much-awaited feature!
 
+Christopher Overton No, I wouldn't use the log messages as input in determining the key. Rather, something more like what follows:

Given the verification key K_0, apply a key stretching function such as PBKDF2 with the timestamp as the salt. The output is K_1, the first sealing key, and the salt is saved. At regular intervals (perhaps "N messages or M seconds, whichever is less"), repeat the key-stretching process on K_i to give K_i+1, and the old K_i is securely erased. The old salt, however, is kept, since the full series of old salts are necessary for recomputing the sealing keys from the verification key.

During those intervals, use K_i as the key to a MAC computed over each message that arrives before the interval ends.

Note: This is just a quick idea, and I have no formal training in cryptography. What FSS actually does is likely far more elegant and secure than my idea. I am looking forward to seeing the publication, partially because crypto is a hobby of mine and partially because I'd like to see how it holds up under actual cryptanalysis.
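For concreteness, the scheme sketched in this comment might look roughly like the following in Python. Same caveats apply: this is a toy rendering of the commenter's idea, not what FSS actually does; random salts stand in for the timestamps, and all names are invented:

```python
import hashlib
import hmac
import os

def stretch(key: bytes, salt: bytes) -> bytes:
    # Key-stretching step from the comment: derive K_{i+1} from K_i,
    # keyed by a per-interval salt (the timestamp in the sketch).
    return hashlib.pbkdf2_hmac("sha256", key, salt, 10_000)

K0 = os.urandom(32)          # the verification key, kept off the machine
salts = []                   # the saved salts needed to replay the chain

def advance(key: bytes) -> bytes:
    salt = os.urandom(16)    # stand-in for the interval timestamp
    salts.append(salt)
    return stretch(key, salt)

K1 = advance(K0)             # first sealing key; K0 is then absent from the host
tag = hmac.new(K1, b"log line", hashlib.sha256).digest()
K2 = advance(K1)             # interval rollover; K1 would be securely erased

# Verifier: replay the saved salts from K0 to recover K1 and check the tag.
recovered = stretch(K0, salts[0])
assert hmac.compare_digest(tag, hmac.new(recovered, b"log line", hashlib.sha256).digest())
```

Note how the salts must be retained alongside the log, exactly as the comment says: without them the verifier cannot recompute the intermediate sealing keys.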
 
+Lennart Poettering Cool feature. Will you accept a patch that cuts out all that passive voice in the user-facing output?

(Sorry, pet peeve. Can't help it.)
 
+Michael Gebetsroither correct. The interval is controllable at key setup time. "journalctl --setup-keys --interval=10s" will lower the key change interval to 10s. Of course this will result in CPU wakeup and a bit of IO every 10s then, since we need to change the key even if no log data is written.
 
+Christopher Overton I've learned that there are three problems cryptography can solve well:

Secrecy: "Don't let anyone see this!"
Integrity: "Was this message altered?"
Authentication: "Is this from the source it claims to be from?"

And a whole bunch it cannot solve well, if at all:

Indestructibility: "Don't let anyone delete this!"
Guaranteed delivery: "This needs to get to Point B!"
Humans: "But he said he was tech support, and to tell him my password!"

FSS covers Integrity, but not Secrecy or Authentication. Since it can't cover Indestructibility (for this you have SELinux in a running system, and hardware protections are possible, but that shouldn't even let anything get to the point where FSS is necessary) or Guaranteed Delivery (which is still an unsolved problem altogether), there will always be the possibility of the logs being outright deleted or prevented from reaching a log server. At that point, you have to weigh the cost of mitigating those risks through other means (store the logs on hardware WORM media; have a dedicated logging-only hardwired network) against the benefit.

And the worst security hole is almost always humans. That can be limited by splitting it into failure domains (a specific set of admins for the logserver, etc), but the issues are still there.
 
It's just needlessly wordy. And making it less wordy and convoluted is really simple:

You have generated a new key pair. The following local file contains the secret sealing key. Advancing the sealing key will automatically update this file. You should not use this key on multiple hosts.

Please write down the following secret verification key. Store it in a safe location, and do not save it locally on disk.
 
+Lennart Poettering Generally passive voice is considered to be overly wordy and not as clear a statement. At university I wrote several papers, and I have the bad habit of writing in the passive voice. Just turning all the passive phrases into active ones would usually net me a sizable reduction in word count while making statements clearer and stronger.

For Example:

systemd was written in part by Lennart Poettering

vs

Lennart Poettering wrote part of systemd.

They have the same meaning, but the second one is active and is clearer because it emphasizes the subject performing the action on the object in the sentence. The person acting is Lennart Poettering and the thing being acted on is systemd. Placing systemd first as the object being acted upon requires more verbiage and doesn't read as well. This is a very simple example, so the benefits of active voice aren't as clear here. However, in longer passages it becomes more obvious.
 
+Florian Haas While I do share it I'm notoriously guilty of not following it. I generally write in the passive voice whether I'm aware of it or not.
 
+Dave Quigley I know, I do too. I guess anyone with a tech background is notorious for doing that, which for me means lots of editing whenever I do any sort of tech writing. (+Lennart Poettering, sorry for going off-topic here)
 
+Lennart Poettering ah, thought so, np. Doesn't really matter if it's 15min or 10s, though couldn't the writing of the new key be optimized away if the key on disk has not signed any messages yet, without compromising any security guarantees?
E.g. if the key on disk has not yet signed any log messages, it is imho not necessary to write a new one. A new key could easily be computed when needed from the last key written, by just doing the key schedule a few times. Similar handling to what is needed on power-on after a long period.
 
+Christopher Overton It's worse than that. Some devices (SSDs are a good example) may transparently remap blocks and delay erasing them.

That may be avoidable by using a hardware module (TPM or similar), however.
 
+Michael Gebetsroither that it is difficult to delete data properly is well-known, and not specific to FSS. We do our best though to make sure that data is not kept around when we update the key or when the key is removed. For example, we set FS_SECRM_FL and FS_NOCOW_FL on the file, so that file systems which honor these flags securely delete the files and don't keep copies.
 
+Christopher Overton Perhaps I was unclear. The remapping is not visible from userspace; it occurs in the flash translation layer in the drive's firmware.
 
+Christopher Overton The data could potentially be recovered by disassembling the drives and reading the NAND dies directly, but not over the disk bus.
 
+Lennart Poettering oh sorry, seems I was not entirely clear! I was talking about the possible reduction of unnecessary writes of keys to disk on short key retention cycles with low log message volume.
 
+Lennart Poettering this is extremely insensitive to people who have dumbphones. I, as the TRUE voice of users, demand that you write an SMS daemon for GNU/* (not just Linux of course!) that can send this to my dumbphone. BEST REGARDS.
 
I'm not sure I understand the whole idea, but I think I found a hole. Since the machine is compromised, the attacker will gain access to the auth key. Won't the attacker therefore have the ability to alter all future logs?
 
How does this work together with log rotation? Do you have to keep all the logs in order to verify them? If not, what prevents an attacker from just creating a new history starting with the keys that are on the server when he gets access and then deleting the old logs?
 
I'm curious. Does this prevent the unnoticed deletion of logs past a certain time? I believed it did, but comments from other people started to make me doubt it.
 
+Raven Dark yes, as mentioned a gazillion times in this thread: after the attacker has taken over the machine, it cannot be trusted anymore and all log data generated from then on is useless. What the attacker can't do is change history though: he cannot generate valid log files for things from before his break-in.

+Julian Brost journal files you have deleted you cannot verify. Isn't that obvious? I don't follow.. The attacker cannot generate the right seals for log events with old timestamps since the key for them has already been securely deleted.

+Edward Shaw The attacker can delete logs, but he cannot alter them. The admin can easily see if the logs are gone or have holes in them. And he can verify that the logs he still has are valid.
 
+Lennart Poettering  - you can digitally sign a plain text log file and keep the file readable as plain text so that it can be viewed with programs such as less/more, etc...?

I assume that based on your response then, you're not actually signing the file itself, but storing the signature in a separate file... because if you sign a file and store the signature as part of the file, the file is now binary and not text and would require a program to check the signature and display only the text portion...

The reason I mentioned encryption is that in typical implementations, a digital signature on a file usually means/results in the file being encrypted with the signature.

In any case, I still don't see the huge benefit to detecting if someone tampered with a log file. Hackers could compromise the kernel, or hack their way into your logging facilities to inject messages, in which case your system may (unsure where it hooks in) be completely oblivious and seal a false message thinking it's legit.

As it's always been - if you want to try and secure your log files, have them log remotely to a very secure system (because if you want to know how you got hacked, your logs are likely your #1 source of info... and if you can't trust those, you're likely screwed).
 
+Lennart Poettering Yes, that's obvious. I'd like to know what exactly prevents an attacker from using the key that's currently on the system to sign forged logs for the last few days.

By the way what's the lifetime of these keys? Could it be possible to change the logs of the break-in before they're signed?
 
+Eli Sand +Christopher Overton You are operating under the misconception that this is a plain text file. It is not - journald uses a record-based file format, so the signatures can easily be inline.

One of the objections people made to journald in the first place was that it needs tools to read regardless.
 
+Eli Sand the journal files are binary and indexed anyway. And no, you don't "encrypt with a signature". That makes no sense. We neither employ signing nor encryption in the Journal, but something we call Sealing. Did you actually read the Plus story above?

+Julian Brost there's exactly one valid key for each point in time. As time progresses the old keys are forgotten. That means you cannot fake anything later on since you can't know the right key to use for that specific point in time.
 
+Lennart Poettering But I'm wondering how you can verify which is the right key for which point in time. So how can you say that key1 belongs to timestamp 1234 or can you only say that key2 comes right after key1 and so on?
 
+Lennart Poettering Are there any plans to support multiple verification keys? That might be one way to resolve the "untrusted administrators" scenario some commenters have brought up. Admin 1 has key A, admin 2 has key B, and while you may not trust either, unless they collude neither can tamper undetected.

To clarify, this is the brute-force "run the whole sealing system in parallel for each key" method.
 
Okay, this is pretty neat, and a good example of the kind of value real-world sysadmins might gain from a new logging system, something I can safely say that we are skeptical about as a class. Cool.
 
Hi Lennart - have you considered breaking out the FSPRG code and shipping it as its own library?
 
+Kent Yoder nope. I don't like adding more libs to the stack if we don't have strong reasons to make something a lib. That said, there is a git repo on GitHub where I split the fsprg stuff out. Not sure if I am going to maintain that though.
 
If an intruder can change the log, the intruder can change the verification software. In any case, you should transfer the logs to another host to verify them. I.e. you must at least already know that someone has broken into your box.

Where can I see full formal specification of this integrity scheme?
 
The font rendering bug you linked to appears to be fixed as of today.
 
Is there any reason the sealing needs to be based only on time intervals?  It might be useful to be able to seal synchronously just on certain messages ("pam_unix(sshd:session): session opened for user...").
 
if you hack the system within a 'lifetime' of one sealing key, you can still cover your tracks. after all, you have the sealing key at that system, and shouldn't have much problems rewriting the logs and re-signing them. you can also install a tool that would allow you to capture new signing keys when they are produced.

or do i understand this all incorrectly?

individually signed log entries + realtime shipping them to external log server is still the way to go imho.

the other solution would be to encrypt the logs with server key, and only user's other key would be able to decode them into readable form. but that would cause lots of usability issues.
 
so rather than keeping it nice and simple (and lots of programs doing their job) to reduce the possible attack vectors, one big monolithic process THAT now opens sockets? 
where has a monolithic, networking process that controls startup been seen before? 
svchost.exe  ...never been any issues with that.
Features are great, but as one application? 

I wonder what +Linus Torvalds  views on the ever expanding monolithic init system are.


for comparison. 

OpenRC (0.9.3): sysvinit + 300 files, ~30k lines, 3.3k posix sh, ~12k C
Upstart (1.5): 285 files, ~185k lines, ~97k C
Debian: sysvinit + 120 files, 5.8k lines
systemd (v44+): dbus + glib + 900 files, 224k lines, 125k C
sysvinit: 560kB, 75 files, ~15k lines
D-Bus: 11MB, ~500 files. 300k lines, 120k C
glib: 72MB, ~2500 files, ~1.7M lines, ~430k C
 
+Jon Roadley-Battin in true Unix fashion, systemd is built from a number of independent but cooperating daemons. There's PID 1, there's journald, and there are a number of other processes. I am not sure where you are getting "one big monolithic process" from; that's completely made up. You are fudding. Please go away.
 
ironically awaiting the first exploit for the overly complex systemd.

you can already protect your logs by sending them to a remote host.
 
I like the idea, but I think there is still a weakness compared to the described traditional way of shipping the logs (please correct me if I'm wrong):
Once the attacker gains control of the machine, he has 7.5 minutes on average to gain control of the journald process (15min rekey interval) and change any log lines he doesn't like. He will retain the current sealing key, and the keys generated afterwards will also be ok. For the forensic analyst it will not be clear from what time on the logs cannot be trusted.
In the traditional way, the 7.5 minute window does not exist; any log lines that would indicate an intrusion will have left the control sphere of the attacker. If the attack is sufficiently automated, the likelihood of covering the traces successfully increases a lot.
Do you think this is a valid concern? If yes, is there a way to overcome this?
 
An attacker will try to cover the newest entries anyway, so the default 15min epoch time for the journal FSS is way too long... If an attacker hacks a machine at the beginning of an epoch, that's enough time to hack someone and wipe 15min of logs.

In order to protect against well funded, expert attackers, a FI audit log system needs to be able to quickly change epochs. The design options include changing epochs frequently, say every 100 ms in a deterministic fashion; changing epochs after every N log entries (an important special case being N = 1); or categorizing the audit log entries by severity and changing epochs immediately after logging entries of a certain level or higher. -- Bellare & Yee, 1997
 
+Edgaras Lukoševičius +Lennart Poettering 

The example of 100mS was chosen as an example of a value where we thought an attacker's steps would necessarily cross epoch boundaries (on 1997 era machines).  The "right" value depends on the threat model -- do we expect state-level actors? script-kiddies manually following a written script? or one who is using a fully automated attack tool written by a single or small group of talented hackers, knowing that forward-secure logging is being used and that speed is of essence?  For script-kiddies -- or people accidentally stumbling onto a security hole (e.g., configuration error) -- a few minutes might be good enough.  For fully automated attack tools, 100mS is probably too long these days, though it depends on network proximity, etc (if the attack is interactive and requires several network roundtrips, or trigger activities that definitely hits the disk [start processes from files that are not in the buffer cache, etc], so that the steps of the attack has a lower bound for their duration).  For state level actors?  They are extremely well funded, and I wouldn't want to speculate here.

BTW, fast epoch change does not necessitate writing out log entries for the epoch changes, since these messages will have a predictable value and can be omitted.
 
"With FSS we now have a simple alternative that works without any external infrastructure."  - This is bad security advice.