markisus 6 hours ago

The article contains a reference to a much more impressive task where a user automatically decompiled a binary exe game into Python. But I read their original post and here is what that user said.

> Several critics seemed to assume I claimed Claude had "decompiled" the executable in the traditional sense. In reality, as I described in our conversation, it analyzed visible strings and inferred functionality - which is still impressive but different from true decompilation.

So I’m not sure that the implications are as big as the article author is claiming. It seems Claude is good at de-minifying JavaScript but that is a long way away from decompiling highly optimized binary code.

  • haolez 6 hours ago

    But it should be easy to generate such data to train an AI to do that, if someone wants, no?

    • soulofmischief 5 hours ago

      Some transformations irrecoverably lose information. A recontextualization engine such as an LLM might be able to "recover" some information by comparing it to other code in its training set, but it's still a guess and not all code will have representation in the training set.
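
      As a toy sketch of why (illustrative only, not a real minifier): two sources that differ only in identifier names collapse to the same minified output, so no inverse map exists.

      ```javascript
      // Toy "minifier": renames parameters to a, b, c... in order of
      // appearance, then strips whitespace around punctuation.
      // Illustrative only; real minifiers are far more involved.
      function toyMinify(src) {
        const params = src.match(/\(([^)]*)\)/)[1].split(",").map((s) => s.trim());
        let out = src;
        params.forEach((p, i) => {
          out = out.replace(new RegExp("\\b" + p + "\\b", "g"), String.fromCharCode(97 + i));
        });
        return out.replace(/\s*([{}();,+])\s*/g, "$1");
      }

      const a = toyMinify("function add(price, tax) { return price + tax; }");
      const b = toyMinify("function add(cost, vat) { return cost + vat; }");
      console.log(a === b); // true: both collapse to "function add(a,b){return a+b;}"
      ```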

      • gchamonlive 3 hours ago

        With information loss what you previously had as a single solution becomes a generalisation which can be decompiled into many different equally valid solutions.

        That means that even though information is irrecoverably transformed during compilation, an LLM could come up with a semantically valid expanded solution that both fits the compiled information (that is, it can be recompiled successfully) and makes sense semantically for whoever would eventually want to evolve the decompiled code with an understanding of the domain.

        So the goal isn't to recover exactly what code really was, but to provide a possible solution that fits and is useful.

      • Kye 5 hours ago

        I wonder if this would be an ideal use case for diffusion-based LLMs slowly piecing it together with each pass from what it could determine from the last.

        • soulofmischief 4 hours ago

          I've experimented with building diffusion-based LLMs but didn't scale them up enough to draw any worthwhile conclusions about their strength vs transformer-based LLMs. In this case however, theoretical ability to unminify a piece of code mainly comes down to the training set and amount of relevant context baked into the system, not the model architecture itself.

      • Retr0id 5 hours ago

        This is exactly what skilled human reverse engineers do. Almost no code is truly novel, at best it's a novel recombination of existing ideas. So yes, some information is lost, but all information relevant to the execution of the program can certainly be recovered.

        • soulofmischief 5 hours ago

          That's simply untrue, since you don't know if I named a minified variable "x" as "foo", "bar", "mungus" or "chungus". You can guess that it might be mungus based on context, but it could very well be chungus. Recovering lost information is an example of extrapolation, and it is inherently probabilistic.

          > all information relevant to the execution of the program can certainly be recovered.

          This is moving goalposts as it has little to do with my original premise.

          • Asraelite 4 hours ago

            It doesn't really matter if it needs to extrapolate. Whether or not you called the variable "mungus", a sufficiently powerful LLM could infer that this is the best possible name for that variable to make it understandable for humans. Doing this across the entire codebase, it could create source code that is actually more readable than the original.

            Of course we're a long way off from this, but there's no reason it couldn't be done in theory.

            • soulofmischief 2 hours ago

              Please just look at my other comments, because I find myself repeating myself over and over. My comment went past a lot of people's heads, and they rushed towards defensiveness instead of understanding the simple, unarguable fact of life I laid out in it.

              • Asraelite 2 hours ago

                Sorry, I didn't mean to come across as disagreeing with you, I just wanted to point out what I thought was an interesting fact. Yes, the original source code is unrecoverable.

                • soulofmischief 2 hours ago

                  I tend to be username-blind and thought you were the person my comment replied to. Reading your comment on its own, I have no issue with it and I apologize for mistaking you for them!

          • Retr0id 2 hours ago

            None of that matters. Nobody cares if a variable is named "mungus" or "chungus", as long as the code is idiomatic and functions equivalently.

            1:1 Source recovery is never the goal of decompilation, or reverse engineering in general. You just want functionally equivalent, idiomatic code.

            • soulofmischief 2 hours ago

              Do you understand what information is and why my original comment about lossy transformations is entirely correct? It doesn't matter if you don't care about the information. It is still lost.

              > 1:1 Source recovery is never the goal of decompilation

              I am aware, but I appreciate the lesson. Now try to consider why that has no bearing on the points I've made.

              • Retr0id 2 hours ago

                The points you've made are "correct" but irrelevant.

                • soulofmischief 2 hours ago

                  Irrelevant to whom? Did I miss the meeting?

          • willy_k 3 hours ago

            > This is moving goalposts as it has little to do with my original premise.

            Your original comment was (replying to one) about “decompilation” of binaries. You seem to be talking about some fantasy perfect decompilation that gets variable names too, but no such decompilation tool or human-employable method does this, and I would argue that variable names aren’t information so much as they’re implicit context - proper code implies certain variable names (stylistic differences notwithstanding) and vice versa.

            • soulofmischief 2 hours ago

              My original comment, in full text:

              > Some transformations irrecoverably lose information. A recontextualization engine such as an LLM might be able to "recover" some information by comparing it to other code in its training set, but it's still a guess and not all code will have representation in the training set.

              It is a completely generalized statement about lossy transformations, aka transformations that aren't reversible with a 1 to 1 map. It says nothing in particular about decompilation, or getting variable names.

              > I would argue that variable names aren’t information so much as they’re implicit context

              You are free to argue what you want, but that doesn't change reality: variable names are information about the original state before it was irreversibly transformed. Is the information important to you and your current task? Who knows. But my original comment is absolutely correct.

      • brookst 5 hours ago

        LLMs can synthesize info not in the training set. They should be just as capable of looking at a binary where info has been lost and recreating source just like a human would. Stuff like variable names won’t be exact (for either human or LLM) but can be reasonable inferences based on usage.

        It’s a guess, sure, but I don’t see why it would be a less good guess than a human’s.

        • soulofmischief 4 hours ago

          I agree, I am aware that LLMs can synthesize data, I am a huge advocate of them for this purpose and use them daily for such. This is where I derive the authority in making such a statement about them. I never made any statement relating the performance of LLMs to humans.

          > Stuff like variable names won’t be exact

          Yes, this is the point of my comment around information loss. Lossy transformations inherently cannot be recovered with absolute certainty, and the degree of certainty depends on available context.

          This is true for absolutely any system which produces lossy transformations, not just in the context of code and LLMs, but encoders, etc.

          • brookst 2 hours ago

            Sure, but for the purposes of decompilation, does it matter if a variable name is exact versus merely being a good description of its function? The issue isn’t an idealized “let’s get the source including meaningless whitespace”, it’s “let’s construct source that can be used interchangeably with the original”.

            For that purpose there is no reason LLMs can’t be as good as the best humans.

            • nyrikki an hour ago

              > “let’s construct source that can be used interchangeably with the original”.

              You lose a lot more than whitespace: you lose semantics, metadata, and lots of other information that would make that possible.

              Obviously bytecode languages are a bit easier, you may get lucky, experts help a lot, and perhaps LLMs can help a little.

              Both Rice's theorem and the system identification problem from the cybernetics days relate to why it is so hard.

              > Given a system in the form of a black box (BB) allowing finite input-output interactions, deduce a complete specification of the system’s machine table (i.e., algorithm or internal dynamics).

              AND

              > Given a complete specification of a machine table (i.e., algorithm or internal dynamics), recognize any BB having that description.

              Decompilation doesn't give you the equivalent of the original source code; it gives you new source code that appears to be functionally similar to it, and many people who have been forced to use decompilation to recover from lost source code or consultant time-bombs have run into problems that admittedly seem pretty counterintuitive.

              Remember that Rice-Shapiro and Kreisel-Lacombe-Shoenfield-Tseitin extend Rice to partial and total functions in finite time.

              Telling whether a program is equivalent to a fixed other program, even for total functions, is still undecidable, because these are 'non-trivial' properties.

              Most of the time you can make it work, but it isn't a case of:

              > “let’s construct source that can be used interchangeably with the original”
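
              A practical (if never conclusive) workaround is differential testing: sample inputs and compare the original against the recovered implementation. A minimal sketch, with stand-in functions rather than real decompiler output:

              ```javascript
              // Program equivalence is undecidable in general, so in practice you
              // sample: run both versions on many inputs and compare. Passing
              // builds confidence; it is never a proof of equivalence.
              const original = (x) => (x * 2 + 4) / 2; // stand-in for the compiled artifact
              const recovered = (x) => x + 2;          // stand-in for the decompiled guess

              function differentialTest(f, g, samples) {
                for (let i = 0; i < samples; i++) {
                  const x = Math.floor(Math.random() * 2001) - 1000;
                  if (f(x) !== g(x)) return { equal: false, counterexample: x };
                }
                return { equal: true }; // "no mismatch found", not "equivalent"
              }

              console.log(differentialTest(original, recovered, 1000).equal); // true
              ```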

            • soulofmischief 2 hours ago

              No, it doesn't matter, and as I explained above, I never mentioned humans once in my comment, nor did I make any comparisons. I simply described a mathematical fact that a lot of people are having trouble understanding and digesting.

              I do appreciate your comment as it's not defensive, it's made in earnest, so please don't take my above statement to be a reflection of you in particular.

          • smallmancontrov 4 hours ago

            Sure, but the ability to synthesize data challenges the implicit assumption that all reconstruction error is bad. Who is to say the LLM won't come up with better variable names than the original source? It probably won't, but I could see it often hitting roughly equal quality. The reconstruction error metric would complain but a human would not, so the problem is now with reconstruction error as a metric.

            • soulofmischief 2 hours ago

              > Who is to say the LLM won't come up with better variable names than the original source?

              This is still a loss of information. You're completely misunderstanding the point: Minification is an irreversible process unless you happen to have exactly the context you need, and can verify the provenance of the source in question. Coming up with "better" variable names is not recovering information, it's extrapolating information about the old state to the point of error.

    • kelsey98765431 6 hours ago

      Replace the word easy there with the phrase "technically may be possible with great effort and expense, probably"

    • kees99 5 hours ago

      A decent optimizing compiler will necessarily lose a fair bit of source information: loop unrolling, function inlining, tail-call optimizations, etc.

      There is no good way to reconstruct that, AI/ML or not.

      • brookst 5 hours ago

        For loop unrolling at least, wouldn’t most human reverse engineers see the original loop? It’s far more likely that the source had a loop than that some high level language programmer did the unrolling in source.
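
        As a sketch (hypothetical snippets, not actual compiler output), the unrolled form loses the loop, but a functionally equivalent loop is the natural reconstruction:

        ```javascript
        // What an unrolling compiler might conceptually emit:
        function sum4Unrolled(a) {
          return a[0] + a[1] + a[2] + a[3];
        }

        // The loop a reverse engineer would most plausibly reconstruct;
        // the original bound and induction variable are gone, but this is
        // the likelier shape of the source.
        function sum4Loop(a) {
          let total = 0;
          for (let i = 0; i < 4; i++) total += a[i];
          return total;
        }

        console.log(sum4Unrolled([1, 2, 3, 4]) === sum4Loop([1, 2, 3, 4])); // true
        ```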

  • casey2 2 hours ago

    I don't see why these claims are even being made. It's well known that a transformer could in theory (in practice, implementation details prevent this) translate all the data in, say, an NES cartridge into a git repo on a Linux machine that compiles for x86, or even into an ELF x86-64 binary directly.

    Is the claim that a general system exists? I'm extremely doubtful of that claim, but one that could handle every published NES game for some current Linux environment? That would definitely be easier than making something like current Claude.

  • ninetyninenine 4 hours ago

    [flagged]

    • markisus 2 hours ago

      It's a little hard to understand your criticism. From just reading the original article, I had assumed that Claude could decompile binaries. The author said

      > Understand dear reader that this technique can be done on any programming language and even from pre-existing binaries themselves.

      Following that sentence, the author included a twitter embed pointing to a reddit thread about decompiling a binary. Only after I went to the reddit thread did I find that there was no decompilation involved.

jameshart 2 hours ago

This feels very much like the work of someone with ‘just enough knowledge to be dangerous’.

At no point in this process does the author seem to stop and inspect the results to see if they actually amount to what he’s asking for. Claiming that this output represents a decompilation of the obfuscated target seems to require at least demonstrating that the resulting code produces an artifact that does the same thing.

Further, the claim that “Using the above technique you can clean-room any software in existence in hours or less.” is horrifyingly naive. This would in no way be considered a ‘clean room’ implementation of the supplied artifact. It’s explicitly a derived work based on detailed study of the published, copyrighted artifact.

Please step away from the LLM before you hurt someone.

  • Snuggly73 2 hours ago

    “Using the above technique you can clean-room any software in existence in hours or less.”

    Having spent my misguided youth doing horrible things to Sentinel Rainbow and its cousins, I can only chuckle.

    • dogma1138 2 hours ago

      For those who don’t know: Sentinel Rainbow was a DRM dongle. These were popular from the 90s through the early 2000s for enterprise/business software, especially software useful at the smaller end of the business scale (CRM, ERP, CAD/CAM), where piracy concerns were much bigger due to the relatively high cost of development combined with a relatively small market to begin with.

      SaaS pretty much made all of that obsolete, since with SaaS you get unbeatable DRM for free.

  • 29athrowaway 2 hours ago

    If my understanding is correct, what's legally protected is reproducing a proprietary IP design rather than studying the design?

    If you create a new design that doesn't have the proprietary elements in it that's not grounds for copyright infringement?

    • jazzyjackson an hour ago

      If you studied the source code, you can't say you've independently created the copy. The trick with clean-room design is that the implementers work from a specification which does not include copyrighted material.

      https://en.m.wikipedia.org/wiki/Clean-room_design

      But maybe you could use one LLM to study the software and write a specification, then throw that over the wall to a different human who uses an LLM to write software based on that spec

      • spwa4 21 minutes ago

        So you should do the same as with normal clean-room practices?

        LLM1: code -> English description of code

        LLM2: English description of code -> code

        And that would be clean room? Might be cool to automate that. I bet you could train LLMs to do exactly that.
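
        As a sketch, that pipeline is just a composition with a wall in the middle (llmDescribe and llmImplement are hypothetical placeholders, not a real API; stubs below keep it runnable):

        ```javascript
        // Hypothetical two-agent "clean room" pipeline. Only the English
        // spec crosses the wall; the source never reaches the implementer.
        async function cleanRoomPort(sourceCode, llmDescribe, llmImplement) {
          const spec = await llmDescribe(sourceCode); // agent A: code -> spec
          return llmImplement(spec);                  // agent B: spec -> code
        }

        // Stubs standing in for real model calls:
        const describeStub = async () => "a function that doubles its numeric argument";
        const implementStub = async () => "const double = (n) => n * 2;";

        cleanRoomPort("/* obfuscated source */", describeStub, implementStub)
          .then((out) => console.log(out)); // const double = (n) => n * 2;
        ```

        Whether a court would treat an LLM-mediated wall as a genuine clean room is, of course, the open question.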

  • futasuki an hour ago

    I checked the author’s result. It is 100 percent BS, all hallucination. It has nothing to do with the original Claude Code.

viraptor 7 hours ago

I'm not sure why this is framed as an issue for security teams. Transpiling software has been a thing for ages. Especially in the JS world. Decompiling has been a bit harder without automation, but unless you have black box tests, this process will take ages to verify that the result has matching functionality.

So why would the blue teams care beyond "oh fun, a new tool for speeding up malware decompilation"?

Edit: To be clear, I get the new reverse engineering and reimplementation possibilities got much better and simpler. But the alarmist tone seems weird.

  • sbarre 3 hours ago

    I read the blog post, especially in combination with the end bit, at least in part as an advertisement for the author's capabilities and services.

    That makes the tone make a bit more sense to me.

  • Avicebron 6 hours ago

    It seems like "red-teaming" and "security research" have become more socially prominent recently, so people naturally aligned with grift (e.g. making things seem alarmist and positioning themselves as the only ones in the know) are trying to seem part of the club?

  • giancarlostoro 5 hours ago

    Agreed. Are we supposed to stop developing all software as a result? Ask LLMs to censor reverse engineering? Someone else won't care and will build another LLM to bypass the limitations.

  • SebFender 5 hours ago

    On point. We're not really interested in these things. Yeah, we take a look at it and stay informed, but the main focus remains bypass of controls and the data itself.

    With decent backend controls - apps don't/shouldn't do much in the end. Once you show information on a screen consider it potentially gone.

mpalmer 5 hours ago

Three years ago, you wrote

> Systemically, I'm concerned that there is a lack of professional liability, rigorous industry best practices, and validation in the software industry which contributes to why we see Boeings flying themselves into the ground, financial firms losing everyone's data day in and out, and stories floating around our industry publications about people being concerned about the possibility of a remotely exploitable lunar lander on Mars.

> There's a heap of [comical?] tropes in the software industry that are illogical/counterproductive to the advancement of our profession and contribute to why other professions think software developers are a bunch of immature spoiled children that require constant supervision.

3 weeks ago you posted something titled "The future belongs to people who can just do things".

Today you post this:

> Because cli.mjs is close to 5mb - which is way bigger than any LLM context window out here. You're going to need baby sit it for a while and feed it reward tokens of kind words ("your doing good, please continue") and encourage it to keep on going on - even if it gives up. It will time out, lots...

I don't think you are someone who can just "do things" if you think a good way to de-obfuscate 5MB of minified javascript is to pass it to a massive LLM.

Do you think you are advancing your profession?

  • rafram 4 hours ago

    Why do you feel the need to be so rude about an interesting little blog post?

    Obviously you don’t need an LLM to prettify obfuscated JavaScript. But take a look at the repo. It didn’t just add the whitespace back — it restored the original file structure, inferred function and variable names, wrote TypeScript type definitions based on usage, and added (actually decent) comments throughout the source code. That simply isn’t possible without an LLM.
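
    For a feel of what that looks like, here is a made-up before/after (not taken from the actual cli.mjs): every name and type below is inferred from usage, not recovered.

    ```javascript
    // Minified original (hypothetical): function h(e,t){return e.filter(n=>n.ts>t)}
    // Everything below is inference: the names, the JSDoc types, and the
    // comment are guesses from usage, not recovered facts.

    /** @param {{ts: number}[]} events @param {number} cutoff */
    function getEventsAfter(events, cutoff) {
      // Keep only events with a timestamp later than the cutoff.
      return events.filter((event) => event.ts > cutoff);
    }

    console.log(getEventsAfter([{ ts: 5 }, { ts: 1 }], 3)); // [ { ts: 5 } ]
    ```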

    • helsinki 2 hours ago

      That's the thing - it wasn't even interesting. It was just some LinkedIn garbage post, in my opinion.

    • mpalmer 4 hours ago

      > That simply isn’t possible without an LLM.

      Do you have a lot of experience with minified code?

      • rafram 4 hours ago

        Yes, I do.

        Please link to a tool that can infer function/variable names and TypeScript type definitions from minified JS without using LLMs or requiring significant user input.

        • mpalmer 4 hours ago

          You didn't say it's impossible for a tool to do it. You said it's impossible.

          Also, it's guessing at the names, guessing at type definitions, and it makes further guesses based on its previous ones, correct or no. If you don't already know what you're doing, you're in trouble.

          For someone to claim that they're sharing the "decompiled" source of Claude Code for the public good is self-important back-patting nonsense, let alone misleading.

          • rafram 4 hours ago

            [flagged]

            • throw10920 3 hours ago

              > I get the sense that you’re frustrated that something you’ve invested a lot of time and energy into learning is being automated. Maybe you’re scared because the technology has moved so quickly and your understanding of how to use it hasn’t kept up.

              This is profiling/projection. You're incapable of responding to the GP's points so you're instead emotionally lashing out and attacking them. This is not really suitable for HN.

              > But making someone’s day worse on Hacker News isn’t a good way to deal with that.

              This suggests that you're incapable of distinguishing criticism of someone's work with personal attacks on them (furthered by the profiling that you tried to conduct above). Those things are not the same. If your day is ruined by someone posting reasonable criticism of an article that you personally submitted to HN, a place explicitly designed for intellectual curiosity, your expectations need to be adjusted.

              • rafram 2 hours ago

                I did respond to GP’s points. And I didn’t say “day is ruined.” I’m also not OP.

                • throw10920 2 hours ago

                  > I did respond to GP’s points.

                  And you also profiled and personally attacked them.

                  > I didn’t say “day is ruined.”

                  > making someone’s day worse on Hacker News

                  Now you're continuing to be dishonest. For the purposes of this discussion, those are the same thing.

                  > I’m also not OP.

                  Reading my comment will show that I never said you were nor are any of the points I made predicated on that.

            • mpalmer 3 hours ago

              OP is an adult who published his writing and posted it here himself for feedback. It's not all going to be positive.

              > I get the sense that you’re frustrated that something you’ve invested a lot of time and energy into learning is being automated. Maybe you’re scared because the technology has moved so quickly and your understanding of how to use it hasn’t kept up

              Wrong on all counts. I use LLMs to write code all the time, and I know how they work, which is why I find processing 5MB of JS through one to be an obscene waste of energy.

              I do not use LLMs to publicly claim abilities I don't already have myself. Reading this article does not worry me one bit about my job security.

  • bryanrasmussen 4 hours ago

    I get the feeling that you think they are not advancing their profession and that the quotes you have made of their work are such obvious examples of some problem with them that your parting question is some sort of stinging rebuke - is that correct? If so I have to admit I'm not following the connections.

    • mpalmer 4 hours ago

      It's a rebuke. I don't mind that you're not following the connections.

IshKebab 6 hours ago

Erm sure... so is the output actually any good? I don't think anyone doubted that the LLM could produce some output but I would like to know if it is actually good output. Does it compile? Does it make sense?

  • causal 5 hours ago

    Why does the post avoid this obvious question? Claude is impressive, but it still hallucinates a lot.

    You really need to be able to build + run + verify features + compare compiled outputs; then you can be somewhat confident it really did what the author is claiming.

  • futasuki an hour ago

    No, the output is all hallucination. The minified version contains many prompts that can be easily found. None of them appear in the author's result. None of the code structures and identifiers of the minified version are present. It's all BS.

zeckalpha 3 hours ago

That's not the usual definition of clean room.

If you had it generate tests then handed the tests off to a second agent to implement against...

zahlman 2 hours ago

> Please understand that restrictive software licenses no longer matter because these LLMs can be driven to behave like Bitcoin mixers that bypass licensing and copyright restrictions using the approach detailed in this blog post.

This reads to me like "Please understand that legal protections no longer matter because computers can now break the law for you automatically".

saagarjha 7 hours ago

> You might be wondering why I've dumped a transpilation of the source code of Claude Code onto GitHub and the reason is simple. I'm not letting an autonomous closed source agent run hands free on my infrastructure and neither should you.

Asking it for its source code (AI never lies, right?) and then buying it on your personal card so corporate security doesn’t know what you’re doing makes me feel a lot better about it.

vlovich123 3 hours ago

> Please understand that restrictive software licenses no longer matter because these LLMs can be driven to behave like Bitcoin mixers that bypass licensing and copyright restrictions using the approach detailed in this blog post.

I’m pretty sure translation of a text into another language would still count as copyright infringement. It may be hard to prove, but this isn’t a copyright bypass.

  • Mathnerd314 3 hours ago

    There is a question of originality. If the variable names, comments, etc. are preserved, then yes, it is probably a derivative work. But here, where you are starting from the obfuscated code, there is an argument that the code is solely functional, hence doesn't have copyright protection. It's like how if I take a news article and write a new article with the same facts, there's no copyright protection (witness: news gets re-reported all the time). There is a fine line between "this is just a prompt, not substantial enough to be copyrightable" and "this is a derivative work" which is still being worked out in the legal system.

  • 1970-01-01 3 hours ago

    Only if the translation is then published

thegeomaster 3 hours ago

This is total bullshit. It's clear by spending 2 minutes with the output, located on https://github.com/ghuntley/claude-code-source-code-deobfusc....

The AI has just made educated guesses about the functionality, wrote some sensible-looking code and hallucinated a whole lot.

The provided code on GitHub does not compile, does not work in the slightest, does not include any of the prompts from the original source, does not contain any API URLs and endpoints from the original, and uses Claude 3 Opus! And this is just from a cursory 5-minute look.

  • jasonjmcghee 3 hours ago

    This needs to be the top comment. Many folks are arriving on the post and taking it at face value. I don't know if it's ignorance or an attempt at a publicity stunt on the author's part, but it isn't at all what they claim.

    • Sharlin 2 hours ago

      Based on the author's other blog posts, they certainly seem to have drunk the LLM Kool-Aid. Likely enough of it to make their conclusions perhaps slightly biased.

    • thegeomaster 3 hours ago

      > I don't know if it's ignorance or an attempt at a publicity stunt on the author's part, but it isn't at all what they claim.

      I let the author know on Twitter too: https://x.com/thegeomaster/status/1895869781229912233

      If it's the former, I assume he will update or take down the blog post.

      • jasonjmcghee 2 hours ago

        > but if it’s not a 1:1 then deobfuscate is the wrong word for sure

        "if it's not" is so troubling

aeve890 6 hours ago

People need LLMs to transpile JS now? Unless it can reliably extract semantics, I don't see the novelty.

mtrovo 6 hours ago

I don't understand Anthropic's decision to release this project as an npm package but not open-source it. Claude Code is such a great example of how agents could work in the future that the whole community could benefit from studying it. Plus, the work on integrating MCPs alone could create a huge network-effect opportunity for them, one that's much bigger than keeping the source code secret.

All they've done so far is add an unnecessary step by putting a bounty on who will be the first to extract all the prompts and the agent orchestration layer.

  • Etheryte 6 hours ago

    One obvious reason to not make it open source is licensing, if you do that, all of your competitors can cookie cutter copy what you're doing.

ojr 4 hours ago

I just inherited a Flutter project with no readme and no prior Flutter experience. AI helps, but adding new features and deploying is still a tall task; having a conversation with the previous contributors is invaluable and somehow underrated these days.

yellow_lead 3 hours ago

> cli.mjs

> This is the meat of the application itself. It is your typical commonjs application which has been compiled from typescript.

Why is it .mjs then?

  • nloomans 3 hours ago

    I think because Claude said so and the author just copied it without checking. You can see it in the first screenshot of Claude's output:

    > After examining the provided code, I've determined that this appears to be a CLI application for Claude code-related functionality, built as a CommonJS TypeScript application that has been compiled with webpack.

    Although, looking at the minified code, it seems to be using module.createRequire for CommonJS compatibility, so maybe it isn't completely wrong: https://nodejs.org/api/module.html#modulecreaterequirefilena...
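
    For reference, the standard createRequire idiom in an ES module looks like this (a generic Node pattern; whether cli.mjs's bundler emits exactly this shape is an assumption):

    ```javascript
    // An .mjs file is an ES module, but it can still load CommonJS
    // dependencies through module.createRequire - a common sight in
    // bundled output that mixes module systems.
    import { createRequire } from "node:module";

    const require = createRequire(import.meta.url);
    const path = require("node:path"); // CommonJS require inside an ES module

    console.log(typeof path.join); // function
    ```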

  • panarky 3 hours ago

    .mjs typically designates newer ECMAScript modules, contrasted with .js for older CommonJS.

bavell 3 hours ago

I appreciate the content and respect the hustle but I'm really not a fan of the author's writing style.

  • zahlman an hour ago

    At least it doesn't come across as AI-generated...

amelius 6 hours ago

> these LLMs are shockily good at transpilation and structure to structure conversions

I wonder if it is possible to transpile all the C Python modules to an api version that has no GIL, this way.

  • Retr0id 5 hours ago

    I'm confident you could get code output that compiles and runs, but not confident that you wouldn't end up with subtle race conditions. Aside from anything else, there's not much nogil C python code in training sets yet.

licnep 6 hours ago

Interesting, I never thought about this use case before, but LLMs may be exceedingly good at code deobfuscation and decompilation.

iLoveOncall 7 hours ago

This is beyond clickbait, a node application that includes the map files is not even remotely "compiled".

  • notpushkin 5 hours ago

    It doesn't include a sourcemap for the main module, cli.mjs.

api 3 hours ago

It has always been possible to decompile and deobfuscate code. This makes it way, way easier, though it still requires effort. What this produces is not going to be perfect.

The author thinks this invalidates the business models of companies with closed source or mixed open and closed components. This misunderstands why companies license software. They want to be compliant with the license, and they want support from the team that builds the software.

Yes, hustlers can and will fork things just like they always have. There are hustlers that will fork open source software and turn it into proprietary stuff for app stores, for example. That's a thing right now. Or even raise investment money on it (IMHO this is borderline fraud if you aren't adding anything). Yet the majority of them will fail long term because they will not be good at supporting, maintaining, or enhancing the product.

I don't see why this is so apocalyptic. It's also very useful for debugging and for security researchers. It makes it a lot easier to hunt for bugs or back doors in closed software.

The stuff about Grok planning a hit on Elon is funny, but again not apocalyptic. The hard part about carrying out a hit is doing the thing, and someone who has no clue what they're doing is probably going to screw that up. Anyone with firearms and requisite tactical training probably doesn't need much help from an LLM. This is sensationalism.

I've also seen stuff about Grok spitting out how to make meth. So what? You can find guides on making meth -- whole PDF books -- on the clear web, and even more on dark web sites. There are whole forums. There are even subreddits that do not not (wink wink nudge nudge) provide help for people cooking drugs. This too is AI doom sensationalism. You can find designs for atomic bombs too. The hard part about making an a-bomb is getting the materials. The rest could be done by anyone with grad-level physics knowledge, a machine shop, and expertise in industrial and electrical engineering. If you don't have the proper facilities you might get some radiation exposure, though.

There is one area that does alarm me a little: LLMs spitting out detailed info on chemical and biological weapons manufacture. This is less obvious and less easy to find. Still: if you don't have the requisite practical expertise you will probably kill yourself trying to do it. So it's concerning but not apocalyptic.

DrNosferatu 5 hours ago

Now just integrate with Ghidra!

  • ghuntley 4 hours ago

    Yep. Now you are thinking! :)

meindnoch 7 hours ago

Horribly obnoxious writing style. Is there a name for this? It's like the written equivalent of TikTok trash or MrBeast videos.

  • mpalmer 5 hours ago

    I've noticed this before. Even when they're writing, people like this "think in video". And they're not very capable writers to begin with.

  • mkoubaa 6 hours ago

    Reads like it was written by someone with the attention span of a gnat

  • permanent 6 hours ago

    Engagement-optimized writing style (whether intentional or subconsciously learnt).

    • a12k 6 hours ago

      Maybe, but I disengaged partway through (right after “I’m not going to bury the lede” and seeing there was a bunch more engagement filler immediately after, burying the lede). I will not read prose written like this.

      • ghuntley 4 hours ago

        Then you missed the gold at the bottom...

        • bavell 3 hours ago

          The gold-encrusted pile of poo? Not missing much...

        • a12k 4 hours ago

          If the article was compellingly written I guess I would have gotten to it. But the author is a bad writer and uses tricks reminiscent of “read this list, item #9 will shock you!”

          No thanks.

          I opened another article someone posted by the same author, and now that I know they write like this, I couldn't make it through the first paragraph. Absolute trash.

          • zahlman an hour ago

            (You appear to be replying directly to the author, BTW.)

  • cedws 4 hours ago

    Oh it's not just me then. I felt like I was having a stroke reading this.

  • scottcha 6 hours ago

    His style evokes a bit of Hunter S. Thompson for me. I appreciate that it's a bit different than your standard blog style.

    • mpalmer 5 hours ago

      Sure, if HST limited himself to writing about how impressive it is to do things without knowing how to do them.

      The real Thompson'd be more likely to say this guy has given his mind to a goddam machine and he's not even using his soul.

  • gabrieledarrigo 6 hours ago

    Agreed. The article on the performance reviews is disturbing.

  • hu3 6 hours ago

    It had a harsh start for me (a bit too much), but then I kept reading till the end, which is rare for me.

    It also contains some gems previously unknown to me like Claude's binary VB to Python capabilities.

  • yapyap 6 hours ago

    I'd say it's inspired by how they write on the 'influencer' side of tech Twitter, one of the most obnoxious parts of it. The 'AGI is nearly here' people and crypto bros all hang out there.

gtirloni 6 hours ago

TL;DR: developer asks Claude Code to revert TypeScript minification ("decompile"). Target is Claude Code's own CLI tool.

  • ghuntley 4 hours ago

    TL;DR: Wine (the Windows compatibility layer) is about to get more contributors than ever before. How can people miss the gold at the bottom of the post? Heh.

    • jazzyjackson 39 minutes ago

      Just what open source projects need: contributors who don't bother to read the code they're contributing.

    • futasuki 2 hours ago

      The author apparently never checked the output. It has nothing to do with the original; it's all hallucination. Any professional can see this in five minutes.

yodon 7 hours ago

I found this article [0] by the same author and linked in the post more personally valuable - great insights into expert-level use of Cursor.

[0] https://ghuntley.com/stdlib/

  • gabrieledarrigo 6 hours ago

    A pretty boring article.

    • causal 5 hours ago

      The concept was interesting (build up your own stdlib of Cursor rules), but there's a kind of hyperbolic, click-baity flavor to both articles that undermines them a little. I think they would stand fine on their own if they just cooled the dramatics a bit.