hansonkd 2 days ago

In the soup story, the villagers freely gave up their carrots and onions, and the travelers gave no guarantee that they wouldn't be consumed.

In the AI analogy, it is a bit closer in my mind if the travelers were to say: "Don't worry, your onions and carrots and garnishes won't be consumed by us! Put them in the pot and we will strain them out; they are still yours to keep!"

We, the villagers, are dumping our data into the AI soup with a promise that it won't be used when we use the API or check a little "private mode" box.

  • jpadkins 2 days ago

    The analogy breaks down because physical property and intellectual property are different. When we input creative works into training sets, we do not withhold those works from anyone else! Digital copies are different from scarce resources. *

    Also, all the AI ToS I've read have stated they will use my inputs to improve their services. I haven't seen an AI service state they won't use my inputs.

    * Against Intellectual Property is a good book that explores this idea https://cdn.mises.org/15_2_1.pdf

    • throw10920 a day ago

      The analogy is perfectly apt. When an AI is trained on work that you've produced, it steals your effort - your work and sweat have been taken by the model and its users.

      ...unless you think that your employer should be able to withhold wages from you because there's no "physical property" that you've provided to them.

  • dfltr 2 days ago

    And to top it all off, they're charging us for the soup, and it's getting more expensive every time we give them another ingredient.

  • lovich a day ago

    Literally all “promises” mean nothing unless backed up by force.

    The government was a nice backplane to ensure that, but now that its decisions are unreliable, all interactions with other parties are under these natural law rules.

    I don't think this being AI really changes the deal, given that starting situation.

    • dredmorbius a day ago

      A frequent trope, but not universally true.

      Many social conventions are implemented less by force than by withdrawal of cooperation. That's an aggression, but of a very mild form, and regardless one which is remarkably effective without requiring an offensive stance or the risks concomitant to same.

  • 2099miles a day ago

    Psh, the companies are freely giving up the data. It is unmentioned where the villagers got the carrot initially; maybe they also stole it from the library, or promised their users the carrot would not be eaten. Lol

  • RodgerTheGreat 2 days ago

    It would be more accurate to imagine a version of the tale where the stone soup chef rifles through people's houses to collect ingredients without permission (if they were against it surely they would've opted out of his services and obtained guard dogs?), and then opened a stand to sell the soup in the town square at premium prices while tainting the wares of his fellow vendors with his leftover slop.

    • disqard 2 days ago

      Yes! This nuance captures more of today's reality -- esp. the "tainting", which others have also noted (e.g. Emily Bender's "Information Oil Spill")

  • chanux 2 days ago

    In the folk tale, the villagers willingly give stuff they own. The soup chefs do not sneakily go and pick stuff up.

    Oh but the villagers were kind of fooled into giving.

    OK, but it benefits everyone. No mention of soup costing money later.

lsy 2 days ago

Adopting this perspective would improve the quality of efforts around this technology. Instead of thinking of it as somehow creating an "intelligence", seeing it as a complex lens on the training data that is controlled by the prompt helps you understand that the output isn't generated by the model, but by people. And various existing pieces of human effort are brought into focus and collimated by nudging the lens in different directions with a "prompt". The user then gives those pieces meaning and determines whether the result is useful or not.

This makes certain things more clear: notions of "truth" are not in play beyond statistical happenstance, certain efforts to make outputs uniform are more trouble than they're worth, and valuable use cases are strongly correlated with the ability and convenience of the user to confirm the usefulness of the result.

  • kridsdale1 2 days ago

    I see the models as oracular seeing-stones like a wizard might use.

    Ponder the orb! Probe its secrets!

    Holographically, all our text is encoded in there. If you know how to query.

    • eMPee584 2 days ago

      oh so wondrous times ahead may they converge towards peace and prosperity for the whole galaxy

  • xpe 2 days ago

    Using various metaphors carefully and fluidly is key. No single one is sufficient; not this one, nor any other.

    I say: go back to basics. One good foundational point is dispelling confusion and conflation around "intelligence". So many people have woefully narrow and unexamined notions of "intelligence". It wouldn't be unfair to say many people have broken definitions. Broken because they just aren't good enough to make meaningful progress in a modern world where many kinds of agents display many different kinds of intelligence. Such broken definitions are often too specific; too arbitrary; too rooted in binary thinking.

    Many of our current language patterns are liabilities. Not to mention corporate and organizational cultures where hazy definitions slide around and few people will admit that they don't really know what others mean by the term. Sometimes it feels like a big charade where no one wants to hurt anyone's feelings nor appear uninformed. And so it goes, some kind of elaborate mystical ritual where the confused participants lead each other further into madness.

    With this in mind, I find tremendous value in Stuart Russell's definition of intelligence: the ability of an agent to solve some task. An agent is anything that makes a decision: a human, an animal, a system of any kind. This definition intentionally leaves out any notion of (a) humans; (b) consciousness; (c) some arbitrary quality line. This usage cuts through so much bullsh*t. I highly recommend finding a way to shift conversations towards it wherever possible. This isn't easy in my experience. We have so much baggage and crufty thinking, even when we're able to put aside our baser instincts.

    One might say that Russell's definition just "kicks the can down the road". I don't think so. It encourages people to define their metrics a bit more clearly -- hopefully out loud or on paper -- for a particular context. It is one step closer to clarifying things. One step in the right direction -- to stop pretending that we all know what each other means -- and instead actually pose an answerable question.
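
    To make this concrete, here is a minimal sketch of the definition in code (a toy illustration; every name in it is hypothetical, not any standard API). The vague "is it intelligent?" becomes the answerable "how well does it score on these tasks?":

        from typing import Callable, Protocol

        class Agent(Protocol):
            """Anything that makes decisions: a human, an animal, any system."""
            def act(self, situation: str) -> str: ...

        def performance(agent: Agent, tasks: list[str],
                        score: Callable[[str, str], float]) -> float:
            """Intelligence, under this definition, is just measured ability
            to solve the given tasks -- no humans, consciousness, or
            arbitrary quality line required."""
            return sum(score(t, agent.act(t)) for t in tasks) / len(tasks)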

    Now, what about "general" intelligence, you say? Well, one step at a time. Wait until a group of people have demonstrated some ability to find some kind of consensus on particular tasks. It is hard work to socialize these ideas. Defining general intelligence in meaningful ways is really hard and contentious. It often becomes a lightning rod for all manner of other disagreements.

    As one example, look at the shitstorm around various sociological attempts to measure the general aspects of intelligence in humans. Without attempting to summarize it in any detail, there has been a huge dumpster fire involving: poor statistical understanding, shoddy research, tone-deaf communication, willful misinterpretation, accusations of racism, and so on. There are pockets of truth in there, but even trying to find the core nuggets of useful truth makes everything radioactive, depending on the context. A typical person in modern culture is usually unable to calmly make sense of these issues, and who can blame them? Statistical understanding doesn't grow on trees. The same goes for understanding machine learning theory.

    • TwoPhonesOneKid a day ago

      I would just chuck the idea of general intelligence out the window. It seems to give us nothing anyway.

      • xpe a day ago

        An overreaction and/or exaggeration I think. How hard have you looked at the problem? Without a doubt, there are common aspects of many kinds of intelligence.

        • TwoPhonesOneKid a day ago

          Yes. The more I look at it, the more I see the concept of general intelligence as a nonsensical one. What matters is how good you are at solving a given task. I don't think there's any good signal for general ability to solve tasks.

          • xpe a day ago

            > The more I look at it, the more I see the concept of general intelligence as a nonsensical one.

            "nonsensical"? This isn't the right word, is it?

            The idea of general intelligence is certainly sensical, in the sense that it is a coherent idea that is not inherently self-contradictory.

            The idea of general intelligence is also testable. Run experiments and see how people do across a range of tasks. If you run a set of proper experiments and still cannot find any people that do better across the board, such a result would probably suggest there is no "general" intelligence in humans.

            This is not the case, however. Such experiments have been run. In humans, there are definitely people who perform better across the board. They almost certainly have better brains in some sense, though I'm not ruling out more holistic explanations, such as better energy reserves and better microbial health in their guts. (I'm not saying they are "better" people in any moral sense, to be clear.)

            Now, you might say "ok, but their brains require more energy" or "they have a leg-up somehow". Perhaps, but irrelevant to my core point: there is such a thing as generalizable intelligence. (I didn't say perfectly generalizable, of course.)
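
            A toy version of those experiments (purely illustrative, assumed numbers; not real data): give each simulated person one general ability plus task-specific noise, and scores on every pair of tasks come out positively correlated -- the "positive manifold" that the psychometric g factor summarizes.

                import numpy as np

                rng = np.random.default_rng(0)
                n_people, n_tasks = 1000, 6
                g = rng.normal(size=(n_people, 1))            # one general ability each
                noise = rng.normal(size=(n_people, n_tasks))  # task-specific skill/luck
                scores = 0.6 * g + 0.8 * noise                # every task mixes both

                # Off-diagonal correlations cluster near 0.36 = 0.6^2/(0.6^2 + 0.8^2):
                # people who do well on one task tend to do well on the others.
                print(np.corrcoef(scores, rowvar=False).round(2))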

          • card_zero a day ago

            I could go along with that, but then I'd want a definition of personhood that excludes chatbots, facial recognition systems, cunning squirrels, and other task-solvers. (Does "solve" even fit with "task"? Well, whatever.)

          • xpe a day ago

            > I would just chuck the idea of general intelligence out the window. It seems to give us nothing anyway.

            This was the part that I think is exaggerated / overstated.

    • dredmorbius 2 days ago

      Submitter here.

      I came across the Gopnik piece after hearing her discuss it on a recent episode of the Complexity podcast from the Santa Fe Institute (SFI). The series begins here: <https://www.santafe.edu/culture/podcasts/ep-1-what-is-intell...>.

      As I recall that episode doesn't directly tackle what intelligence is, though numerous others from the Complexity back catalogue do, as does an episode from another podcast in the New Books Network (NBN). Two specific approaches stand out.

      In the NBN episode, a discussion of the Turing Test makes specific and detailed note of how that test side-steps the question of what intelligence is entirely by focusing on what it does, and specifically whether an artificial agent can convince a human interlocutor that it is intelligent, through text-based interactions. I find this particular approach (focusing on outputs and appearances rather than inner states and motivations) generally useful, and not only for artificial behaviours. To a great extent, for example, I find what a person, organisation, or institution does far more accessible and generally useful than why it does that. This isn't to say that ends (results/actions) are more significant than means (causes/motivations/intent), but they are accessible and determinable with far less ambiguity or presumption. Knowing causes or motivations is useful for its predictive value, but given even a small sampling of behaviours and instances, it's generally possible to posit or infer these to a useful degree without deep introspection.

      In another approach, taken in multiple Complexity episodes as well as writings and discussions elsewhere, former SFI president David Krakauer posits that intelligence is search, and specifically search through a pattern space for a solution or approach to some given problem. (See especially "Ingenious: David Krakauer", Nautilus, 16 April 2015 <https://nautil.us/ingenious-david-krakauer-235383/>.)
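
      To make the "intelligence is search" framing concrete, here's a minimal sketch (my own illustration, not Krakauer's formalism): a generic walk through a space of candidate patterns, scored against the problem at hand, which covers everything from game-tree search to gradient descent if you squint.

          def search(patterns, fitness, good_enough):
              """Walk a pattern space, keeping the best candidate seen so far."""
              best, best_score = None, float("-inf")
              for p in patterns:
                  s = fitness(p)
                  if s > best_score:
                      best, best_score = p, s
                  if best_score >= good_enough:
                      break  # a solution to the given problem has been found
              return best

          # e.g. a brute search for a nontrivial divisor of 91:
          # search(range(2, 91), lambda d: -(91 % d), good_enough=0) -> 7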

      I've put some thinking into an ontology of technological mechanisms, where one of those is information, consisting generally of input (sensing, parsing), storage/retrieval, output, and logic. Intelligence falls under logic, and I'd argue involves comparisons of current and prior experience (e.g., sensing and storage/retrieval), as well as applying rules, algorithms, inferences, and the like (all forms of logic, broadly). "Intelligence" then is a form of logic, where logic is generally processing (as opposed to input/output/storage) of information.

      At what stage a human-like or general intelligence emerges is of course somewhat nebulous. To quote a long-standing US National Park Service observation, there is considerable overlap between the smartest bears and the stupidest humans when it comes to storing and/or raiding food and garbage. In the AI field, we've seen specific problems, applications, domains, or however you'd choose to call them fall into the class of those in which artificial search (or artificial intelligence, though "search" may be more accurate in the sense of "search through problem space to a useful solution") routinely bests humans, including checkers (trivial), chess (challenging), go (even more so), and now creative endeavours such as image, music, and text generation.

      (A professional classical musician friend recently told me directly that at least some of the AI compositions they're encountering are not only good but show what can only be described as strong musical content and coherence as compared to the classical tradition. Since I'd think that the standards for popular music, with its general simplicity, would be far less challenging, their assessment in this case strikes me as notable.)

      I'll also note I'm not especially enthusiastic about AI's potential. The field has seen many periods of apparent rapid progress followed by very long, often decades-long, "winters". Recent progress, say, 2023 onward, has been spectacular, but also seems to be somewhat stalling out and showing profound limits. That isn't to say that new approaches might not come up with greater capabilities, cheaper methods, or both. China's DeepSeek, and the story of human intelligence, including the shrinking of the braincase over recent evolution despite greater apparent intelligence, suggest that efficiency gains may well be the path forward. Something akin to Chomsky's "universal grammar" or grammar hierarchy, or notions of parsing and grouping patterns within the human brain (whether acquired through genetic inheritance, direct experience, or education), might drastically reduce the size and analysis requirements of training corpora. I think it's Krakauer again (this time in a Complexity episode) who notes that the total training set humans require to acquire basic linguistic skills by, say, age 5, is roughly 5 MB of data. This is phenomenally less than current LLM AI models require, and strongly suggests far greater possible efficiencies.
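
      A quick back-of-envelope on that gap (the LLM corpus figure below is my assumption for illustration; public reporting puts frontier training sets on the order of 10^13 tokens):

          human_corpus_bytes = 5e6   # the ~5 MB figure cited above
          llm_tokens = 1e13          # assumed order of magnitude, not a measured value
          bytes_per_token = 4        # rough average for English text
          llm_corpus_bytes = llm_tokens * bytes_per_token

          print(llm_corpus_bytes / human_corpus_bytes)  # ~8,000,000x more data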

      Another factor, discussed in the recent Complexity series, is that humans of course learn not only from reading texts, but from observing and interacting with our environment. That's something AI presently does relatively little of, as I understand it, though certain domains (e.g., autonomous vehicles) may be applying this method. I'm not following progress on this at all presently, though I suspect I should.

      • econ a day ago

        The 5 MB baby has me wondering... In humans, reasoning and memorizing have overlapping usefulness. AI is so incredibly good at the latter that reasoning might be extremely undeveloped. On rare occasions I've seen top human students fail to question things they've learned that would have been very obvious to someone with an IQ under 60. An accidental lack of interaction with the data.

CaptainFever 2 days ago

I compared stone soup to AI before, but for very different reasons. That is, you cannot convince the villagers to contribute their data/food by appealing to them; rather, you have to trick them into giving up their data/food. But the result is bigger than the sum of its parts, and it benefits even the villagers (if they are willing to drink the soup).

IRL, the trick is ToSes and pro-AI laws. Meanwhile, some villagers may be willing to contribute in the first place: those will be free culture advocates, public domain advocates, pirates, etc.

aamar 2 days ago

I would ask anyone making these kinds of deflationary arguments to explain if the same argument can be applied to the best of human creative work. Humans also use the raw materials of others, whether that’s words, musical scales, genres, idioms, or anecdotes.

Where is the line between recapitulation and innovation? Is it a line that we think current LLMs are definitely not crossing, and definitely will not cross in the near future? If so, make that argument.

  • beepbooptheory 2 days ago

    From TFA:

    > To be fair, although the story is intended to be debunking, the folktale also has a positive moral that applies to AI. The collective resources of many humans can make something that no individual could, and that really is magical.

    It's not deflationary, it's just reframing, reattributing what is so impressive about LLMs. We get so caught up in the tech itself, that it exists at all (understandably, considering the way the discourse goes), that we don't stop to appreciate what makes it possible at all: that is, all of us (broadly).

    So many people just can't get past the sci-fi mentality; they make the current AI into a kind of weird but promising baby. But we can also, much more easily and nicely, consider it a beautiful reflection of human writing at large.

    And what's even with all this constant pressure for it to be more than that? All the arguments, philosophical gotchas, weird Skinnerism... It's like you're given a perfectly good hamburger and all you can say is "this is pretty much a steak if you squint".

    • aamar 2 days ago

      Even with that paragraph, I still interpret the essay as deflationary. Even though the stone has some role to play (as a social trigger), it's materially different from the carrots, onions, etc. (which provide actual nutrition and flavor). We can draw clear distinctions. The question is whether this difference-in-kind is real in the case of AIs.

      I’d respond the same way to your hamburger vs. steak analogy. Sure, sometimes the LLM gives us a fine burger and not a steak, and it’s best for us to have the right attitude in that case.

      But if LLMs can produce "steaks" (that is, whatever talented humans do) in the imminent future, that has _enormous practical impact_.

  • myflash13 a day ago

    The line is the invention of actual new knowledge. LLMs have so far failed to do that: they have not made any significant new discovery in any field. See Dwarkesh's Question: https://marginalrevolution.com/marginalrevolution/2025/02/dw...

    If AI was actually intelligent, it should've cured cancer by now, based on the amount of data that it was fed.

  • sdwr 2 days ago

    > Good artists copy, great artists steal

kridsdale1 2 days ago

The thesis appears to be that the CEOs are hoodwinking the populace into giving up their cultural wealth to build proprietary systems.

But TFA also mentioned Wikipedia. Crowd-RLHF trained models are the same. The people know they are volunteering their own labor and information to improve the model because the model gives them value and they want to share the value with humankind.

Everybody enjoys the soup.

  • xerox13ster 2 days ago

    We gave Wiki the info. AI took the info. These things are not the same.

    • gkbrk 13 hours ago

      Took the info? So the info is now gone and only the AI has it? Who deleted the original, the AI did that too?

    • CaptainFever 2 days ago

      You cannot take information, as it can only be duplicated.

dosinga 2 days ago

It's a nice story and I get the bit about the culture, but it does overlook the fact that you don't need the stones while you very much do need the LLMs.

antonkar a day ago

Another metaphor: we can build the Artificial Static Place Intelligence – instead of creating AI/AGI agents that are like librarians who only give you quotes from books and don't let you enter the library itself to read the books in full. Why not expose the whole library – the entire multimodal language model – to real people, for example, in a computer game?

To make this place easier to visit and explore, we could make a digital copy of our planet Earth and somehow expose the contents of the multimodal language model to everyone in a familiar, user-friendly UI of our planet.

We should not keep it hidden behind the strict librarian (the AI/AGI agent) that imposes rules on us, only letting us read the little quotes from books that it spits out, while it itself holds the whole stolen output of humanity.

We can explore The Library without any strict guardian, in the comfort of our simulated planet Earth, on our devices, in VR, and eventually through some wireless brain-computer interface. (It would always remain a game that no one is forced to play, unlike the agentic AI world that is being imposed on us more and more right now, and potentially forever.)

n4michael 2 days ago

> To be fair, although the story is intended to be debunking, the folktale also has a positive moral that applies to AI.

So maybe in this story, LLMs are not so much the stones (trickery) but rather the pot (the unlocking technology).

pzh 2 days ago

This comparison overlooks the fact that, in the original folktale, the stone soup remains a soup -- it never turns into a ribeye steak. Similarly, in the AI version, an LLM will always remain an LLM.

  • doitLP 2 days ago

    But everyone benefits by eating the soup and has a good time partying together.

    The last page of the soldiers running away before the town realizes they were tricked is interesting though.

bigfishrunning 2 days ago

AI is only stone soup if a) you get charged for the soup after adding your carrots and b) they heat the water by burning your house down

wbakst 2 days ago

i like this so much

"stone soup" could be seen as a trick (to get the villagers to provide that which they were previously unwilling), but i like that it's multiple different villagers who provide individual ingredients -- it's the coming together of everyone and their individual contributions that ultimately makes the soup so good

  • dredmorbius 2 days ago

    My take on "Stone Soup" is that it was written as an allegory for cooperation, as well as, perhaps, a guide to how to induce it in the face of reluctance.

    Of course, intent and outcome can differ, and Gopnik takes the piece to a new place. But then, that's also in the spirit of the original as I read it (individual ingredients creating a greater whole).

    And of course, as with all metaphor and allegory, there are limits to the comparison. But there is utility as well, and the point that AI LLMs require significant additions on top of the trained-LLM "stones" bears pointing out.

  • scrumper 2 days ago

    I first read this story in the back of the manual for a DOS program called Fractint in the very early '90s. It was a super-fast fractal generator made by a collective called the Stone Soup Group. It's still around but the SSG disappeared years ago.

    The story stuck with me, I told it to my kids only a few weeks ago.

topherjaynes 2 days ago

Gopnik is a great writer, and this is a very good take. She has a great sense of how to bring psychology to tech. Also, fun fact: she's married to Alvy Ray Smith, for all the computer graphics/Pixar fans out there. I'd love to hear them debate tech takes!

Hasu 2 days ago

A couple of thoughts:

1) The story of stone soup is the story of how some grifters got a free meal. I don't think it's moral instruction, or an example to be learned from, unless you are a grifter.

2) In the stone soup example and in cases like Wikipedia, the soup is freely shared with everyone, regardless of their contributions. Is AI like that, or in the AI stone soup story, are the travelers charging everyone for a bowl of soup? Doesn't that change the story quite a bit?

  • sdwr 2 days ago

    If you take off your cynicism-tinted glasses, it's the story of how community is more than the sum of its parts, and how it sometimes needs a "beautiful lie" as a catalyst (like justice, or freedom!)

    • Hasu 2 days ago

      If you think that community needs a group of strangers to con them into coming together and being more than the sum of its parts, you are more cynical than I am.

      • jimmaswell 2 days ago

        Society at large depends on the collective belief in society. It would stop existing tomorrow if everyone stopped pretending it existed. Laws, court rulings, road signs: it's all imaginary, but the collective illusion allows us to accomplish a lot more than the alternative.

        • card_zero a day ago

          Numbers, language, boundaries between physical objects, all imaginary. Space, time, meaning, France, you name it. Alternatively: all real.

    • Terr_ 2 days ago

      While I can see how it can be retold that way, the core plot-mechanic is still (A) fraud by pot-stirrers and (B) greed by participants.

      At each step, the participant (especially the first) is deliberately misled to believe that they can secure valuable soup for less-valuable ingredients.

      It is not an appeal to their better nature--in many tellings the travelers have already tried that--but an appeal to their baser nature. For it to be a positive story, one must accept that the ends have somehow justified the means.

      • sdwr 2 days ago

        That's a good point!

htrp 2 days ago

This was one of the talks at NeurIPS 2024 in December; highly recommend.

rcpt 2 days ago

> We have a magic algorithm that will make artificial general intelligence from just gradient descent, next-token prediction, and transformers

Which exec is this?

gwern 2 days ago

This is a terrible analogy. Stone soup is disanalogous to generative AI models in almost every way that could matter. This analogy offers no insight, and at best comes off as a pretext for redistributive policies: "you mean our generative AI models" --Bugs Bunny

A stone soup does nothing by itself. It just sits there. LLMs do not just sit there: Claude and o1/o3 and r1 are increasingly active agents. Gopnik just plain ignores this, and indeed, says already obviously false things like "For some time, I’ve argued that a common conception of AI is misguided. This is the idea that AI systems like large language and vision models are individual intelligent agents, analogous to human agents. Instead, I’ve argued that these models are “cultural technologies” like writing, print, pictures, libraries, internet search engines, and Wikipedia." But last I checked, I couldn't ask 'a picture' to go research anime for me and summarize the results, nor could I put it in a self-improving loop to learn how to solve advanced math problems I can't even understand.

Ingredients in soups are used up and destroyed, and can only contribute to one soup; copies of text do none of that.

The villagers had to go out of their way to proactively add ingredients to the soup - indeed, that is the entire point of the original moral! OP seems to think that people donated all the data that they had "stashed away on the Internet" (what a phrase). With generative models, they didn't, and in fact, that's a big reason many people are so mad.

Each ingredient added to the stone soup makes up a meaningful percentage of the output; any single contribution to generative models at the billions-scale is usually invisible and that contribution can be omitted without any measurable change on just about any metric.

Ingredients in soup may be tastier, but they are generally not meaningfully more nutritious. A potato cooked on its own is as nutritious and full of calories as it would be in the soup.

Soups can only be eaten, and eaten once. LLMs do a lot of things, which we are still discovering, are reusable indefinitely, and are already spreading out into fields no one dreamed of (in psychology and economics, for example, increasingly more research is being done 'in silico' with LLMs standing in for humans).

The villagers also donated their ingredients for free. The stone itself does nothing, and the 'soup maker' likewise does barely anything, contributing no ingredients but the stone. LLM trainers spend literally tens to hundreds of billions of dollars, including billions spent collectively on creating data through Scale etc. (Rumor has it that OA and Anthropic alone are spending hundreds of millions of dollars on expert programmers and other PhD specialties, and that this is part of why their models are so much better.) Notably, the little mention of 'Kenya' implies they do it for free as they "jump at the chance"; obviously, the actual Kenyan villagers are very interested in being paid.

Making the soup doesn't make making future soups cheaper nor does it make the future soups tastier; making LLMs drives experience curves which are some of the fastest ever documented, which is why the cost of high-quality outputs has dropped by multiple orders of magnitude in just years, outpacing Moore's law.
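
For concreteness, the standard model of an experience curve is Wright's law: each doubling of cumulative output multiplies unit cost by a fixed learning rate. A sketch with made-up numbers (per the multiple-orders-of-magnitude drop just mentioned, LLM output costs have fallen far faster than the classic 0.8 curve shown here):

    from math import log2

    def wrights_law(initial_cost, cumulative_units, learning_rate=0.8):
        """Unit cost after log2(cumulative_units) doublings of production."""
        return initial_cost * learning_rate ** log2(cumulative_units)

    # Ten doublings at a 0.8 learning rate leaves ~11% of the original cost:
    print(wrights_law(100.0, 1024))  # -> ~10.7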

So... no. There is pretty much no way in which LLM is like a 'stone soup', except in the vague sense that both involve a lot of humans at some point, I guess.

  • dredmorbius 2 days ago

    I'll grant that LLMs do more than just sit there, but what Gopnik is pointing out is that, at least as of last August (in a very rapidly progressing field), they also do not, of themselves, approach AGI without the addition of numerous other ingredients, particularly training data, alignment, and prompt engineering.

    One could argue that in the original allegory, the stones don't merely sit either, but contribute (with some clever persuasive rhetoric) to facilitating the cooperation required to brew a compelling stew.

    All analogies melt if you push them loudly enough.

    • gwern a day ago

      > They also do not, of themselves, approach AGI without the addition of numerous other ingredients, particularly training data, alignment, and prompt engineering.

      That's also not true. A base LLM like GPT-3, without any (specialized) training data, alignment, or prompt engineering, embodied quite a bit of agency (able to answer questions, simulate coding, or take actions in a while-loop), and in principle can be AGI if scaled up. Consider Gato for an example: no alignment, no prompt engineering, just a GPT on data. This is unlike a stone soup, which is completely un-nutritious without the villagers donating all the edible ingredients to make an actual soup, and where the stone itself does nothing (an LLM definitely does something); the two scenarios are different qualitatively, not just quantitatively. Gopnik's analogy and goal for her analogy are both irretrievably flawed.

      > All analogies melt if you push them loudly enough.

      This analogy didn't melt after being pushed too far. It caught on fire spontaneously in the garage while no one was looking before half the parts arrived or had been unpacked.

tehjoker 2 days ago

I think the tension about AI is that, yeah, it is a reflection of all of our contributions, but the benefits are privatized and potentially used to deprive people of sustenance by automating their jobs. The problem is capitalism, not the technology itself.

Science and technology can be used for social good. They are the product of all of our efforts and knowledge combined with labor, yet companies make big bucks selling them back to us while depriving people of things they need unless they are fortunate enough to pay, and even then providing them in the most blood-sucking way possible.

rezmason 2 days ago

I think that if we tried, we could come up with a pretty large cookbook of stone soups.

  • kridsdale1 2 days ago

    Every soup is a Stone soup if you consider the metal pot as a stone.