• 0 Posts
  • 14 Comments
Joined 2 years ago
Cake day: June 17th, 2023

  • There are 3 use cases I’ve seen.

    • Making fossil fuel power stations “clean”.

    • CO2 recovery for long term storage.

    • CO2 for industrial use.

    It’s no good for the first, due to energy consumption. This is the main use I’ve seen it talked up for, as something that can be retrofitted to power plants.

    It’s poor for the second, since the result is a gas (hard to store long term). We would want a solid or liquid product, which this doesn’t produce.

    The last has limited requirements. We only need so much CO2.

    The only large-scale use case I can see for this is as part of a carbon capture system: capture, then react to solidify the carbon. However, plants are already extremely good at this, and can do it directly from atmospheric air, using sunlight.


  • cynar@lemmy.world to Science Memes@mander.xyz · “Entropy? Never heard of it.” · +109 / −2 · 5 days ago

    Just checked the numbers, for those interested.

    A gas power plant produces around 200-300 kWh per tonne of CO2 emitted.

    Capture costs 300-900 kWh per tonne captured.

    So this is basically non-viable using fossil fuel as the power source. If you aren’t burning fossil fuels, then storing that power is likely a far better use of it than spending it on capture.
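
    A quick back-of-the-envelope with those figures, as a minimal Python sketch (the ranges are just the rough numbers above, so treat it as an order-of-magnitude check, not measured data):

    ```python
    # Rough net-energy check for on-site CO2 capture at a gas plant,
    # using the ballpark figures quoted above.
    gen_low, gen_high = 200, 300   # kWh generated per tonne of CO2 emitted
    cap_low, cap_high = 300, 900   # kWh consumed per tonne of CO2 captured

    best = gen_high - cap_low      # 300 - 300 = 0 kWh net (best case)
    worst = gen_low - cap_high     # 200 - 900 = -700 kWh net (worst case)

    print(f"Net energy per tonne of CO2: {worst} to {best} kWh")
    # At best the plant merely breaks even; at worst it spends more
    # energy capturing the CO2 than it generated by emitting it.
    ```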

    It’s also worth noting that the output is still CO2 gas. Long-term containment of a gas is far harder than of a liquid or solid.




  • LLMs can’t become AGIs. They have no ability to actually reason. What they can do is use predigested reasoning to fake it. It’s particularly obvious with certain classes of problem (novel logic puzzles, for instance), where they fall down. I think the fact they fake it so well tells us more about human intelligence than about AI.

    That being said, LLMs will likely be a critical part of a future AGI. Right now, they are a lobotomised speech centre. Different groups are already starting to tie them to other forms of AI. If we can crack building a reasoning engine, then a full AGI is possible. An LLM might even serve as its internal communication method, akin to our internal monologue.