Related: I know that many people use AI image generators to make pixel art, and I recently stumbled upon a great tool for turning AI-generated input into proper pixel art: see https://github.com/jenissimo/unfake.js and the live demo at https://jenissimo.itch.io/unfaker
(disclaimer: I don't know the author, just thought I'd share as I find it amazing)
rvnx 15 hours ago [-]
Very nice to see that this project is hand-crafted and not AI-generated like 99% of the submissions here
So, congrats on your release.
ofrzeta 7 hours ago [-]
When I clicked, I was already anticipating the comments asking "is this vibe coded?", so I kind of asked myself that question. As someone who codes manually as well as experiments with AI-assisted coding, I wonder what attitude we should develop towards AI-assisted coding in the long run. Right now on HN it almost seems like "AI shaming" is at work: if you post a project that's the result of using AI, you can expect a lot of critique around here. While I understand that to a certain extent, I guess we also need to overcome that sentiment. After all, we don't blame people for using IDEs, code completion, or other tools that have become the norm.
itsoktocry 10 minutes ago [-]
>Right now on HN it almost seems like "AI shaming" at work.
HN leans "old school". It's the Angry Nerd trope: Comic Book Guy from The Simpsons.
The people doing "AI shaming" or claiming that "AI doesn't work" are going to have their lunch eaten.
latexr 4 hours ago [-]
> After all we don't blame people using IDEs, code completion or other tools that have become the norm.
Because those don’t have the same issues. It’s not like IDEs, LSPs, and other tools were the target of warranted criticism and then we stopped. Rather, they never received this kind of backlash in the first place.
No IDE has ever caused millions of people absolutely unrelated to it to have to ration water.
https://archive.ph/20250731222011/https://m.economictimes.co...
To use an exaggerated analogy, it’s like saying “people are complaining about arsenic being added to food but we need to overcome that sentiment, after all we don’t blame people adding salt and pepper which have become the norm”.
monsieurbanana 3 hours ago [-]
If that's the reason why people dunk on ai-assisted programming, fine.
That's not the impression I had, though; the criticism I usually see is around laziness, low-effort submissions, etc., which are not inherent issues of using LLMs.
latexr 3 hours ago [-]
> Which are not inherent issues of using LLMs.
But they are exacerbated by them, so the criticism still stands. No one visits HN for low-quality, same-looking submissions. It’s like frequenting r/toolgifs and suddenly almost every post is about one specific hammer. That’d be understandably annoying, and while not the inherent fault of the hammer, it would be an issue only possible because it exists.
monsieurbanana 2 hours ago [-]
I don't disagree, it's annoying. But what's the solution here? Bashing quality submissions because they use AI?
Even if LLMs don't succeed in their seemingly ultimate goal of replacing humans, and I don't think they will, there's no future where we completely stop using them.
I guess either we find a way to filter out AI slop or we wait until people are tired of rehashing the same low-effort criticisms.
Now, you seem like one of the few people who are concerned about environmental issues, and I respect that. If that's why people are against them, it's a whole other discussion, and we can disregard anything I said here.
cdrini 3 hours ago [-]
TLDR: That article is pretty low quality, and the "caused millions of people absolutely unrelated to it to have to ration water" claim doesn't seem like a reasonable conclusion; it's not mentioned at all in the source article. I took some notes on the article and traced the research back to the original piece by The Austin Chronicle, which is significantly better: https://www.austinchronicle.com/news/2025-07-25/texas-is-sti... Would recommend.
Main takeaways:
- Why are we building data centres so close to the equator, where it's hot?
- It's depressing to see the high-quality reporting from The Austin Chronicle watered down into more and more clickbaity soundbites as it gets recycled through other "news" orgs. But at the same time, I wouldn't have heard about it otherwise.
- The water evaporation was interesting to me, and I would love to read more on what percentage evaporates, and whether Stargate's plans to build non-evaporative cooling will actually hold up and how that'll impact the water grid.
- Would love some more info/context on that 463 million number, but I'm stopping my research here for now. Combining it with when/how often Texas has to ration its water would make a stronger argument for/against the claimed water rationing.
- The fact that we don't have good numbers for how much water data centres are using is crazy; we need that level of granularity/regulation.
- Markers of poor reporting:
- Numbers without context/clarity. Would it kill these sites to include a bar chart?
- Citations of sites that market themselves as "engaging entertainment" coverage
- Ambiguous / contradictory data
- Ambiguous references
Notes:
Interesting article! A few weird things:
1. The most cited reference is a site called "Techie + Gamers", which describes itself as: "TechieGamers.com is a leading destination for engaging entertainment coverage, news, net worths and TV shows with a strong focus on Shark Tank." That makes me suspicious of the journalistic quality of both it and this article.
2. The headline says "Texas AI centers guzzle 463 million gallons". Further down it says "According to a July 2025 investigation by The Austin Chronicle, data centers across Central Texas, including Microsoft and US Army Corps facilities in San Antonio, used a combined 463 million gallons of water in 2023 and 2024 alone, as reported by Techie + Gamers." Over 2023 and 2024? Odd that it's giving the sum over two years. And I'm not sure what it means that it includes the US Army Corps. Also, without any context I don't know what this number means.
- I checked the TechieGamers article and this contradicts what is written there, which says the 463 million number is for San Antonio alone.
3. Robert Mace, executive director of The Meadows Center for Water, notes that "once water evaporates, it's gone." This is interesting; I'm not sure how much water actually evaporates versus being returned to the grid.
4. "The scale of water use is massive, as the Texas Water Development Board projections estimate that data centers in the state will consume 49 billion gallons of water in 2025, soaring to nearly 400 billion gallons by 2030, as per Techie + Gamers report. That’s about 7% of Texas’s total projected water use, according to the report."
- Mixed citations here; not sure whether these numbers are from the Texas Water Development Board or Techie + Gamers. Also, they project an increase from ~232 million gallons/year in 2024 to 49 billion in 2025? That's a ~200x increase. And they expect a further ~8x increase from 2025 to 2030, to 400 billion? Or is it because the original number was only for Central Texas? (See the quick arithmetic check after this list.)
- 7% of what? The 2025 number or the 2030 number?
- Again, subtle contradictions with TechieGamers, which says "a white paper submitted to the Texas Water Development Board projected that data centers in the state will consume 49 billion gallons of water in 2025. That number is expected to rise to 399 billion gallons by 2030, nearly 7% of the state’s total projected water use." So it's not the Texas Water Development Board but a white paper submitted to the board? Not sure who produced these numbers now.
5. "Much of the water these centers use evaporates during cooling and can’t be recycled, a critical issue in an area already grappling with scarce water resources, as reported by Techie + Gamers."
- Again really want more info/numbers on this.
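Those ratios are easy to mangle, so here's a quick sanity check (a sketch: the figures are the ones quoted above, and splitting the two-year total evenly across 2023 and 2024 is my assumption):

```python
# Sanity check of the article's figures. Inputs are the numbers quoted above;
# splitting the two-year 463M total evenly across 2023/2024 is my assumption.
total_2023_2024 = 463e6          # gallons, the Central Texas / San Antonio figure
per_year = total_2023_2024 / 2   # ~231.5 million gallons/year

proj_2025 = 49e9                 # gallons, statewide projection for 2025
proj_2030 = 400e9                # gallons, statewide projection for 2030

print(f"measured/yr -> 2025 projection: {proj_2025 / per_year:.0f}x")   # ~212x
print(f"2025 -> 2030 projection:        {proj_2030 / proj_2025:.1f}x")  # ~8.2x
print(f"implied total TX use in 2030:   {proj_2030 / 0.066:.2e} gal")   # if 400B is ~6.6%
```

So the ~200x and ~8x claims are internally consistent; the open question is whether the 2024 baseline (Central Texas only) and the 2025 projection (statewide) are even comparable.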
The root article seems to be from The Austin Chronicle:
1. It starts with "After Donald Trump and Elon Musk’s public breakup, Sam Altman replaced Musk as the president’s new favorite tech guy. Altman, the CEO of OpenAI, has become something like Musk’s archnemesis on the rapidly developing stage of artificial intelligence in Texas." This doesn't match my reading of the news, and it's so colourful that it makes me question the journalistic quality of this article.
2. The reporting across the three sources is mixed on who they're blaming. Economic Times doesn't even mention OpenAI and calls it "Microsoft's Stargate campus". Techie Gamers uses this phrase, but also later says "Microsoft has partnered with OpenAI". And The Austin Chronicle doesn't mention Microsoft at all and focuses on OpenAI. Meanwhile, the Wikipedia page for Stargate calls it a "joint venture created by OpenAI, SoftBank, Oracle, and investment firm MGX"?
3. I take it back; reading further, this article is _significantly_ better than the others, with many more reputable sources.
4. Finally we get some real sources!! The 49 billion 2025 and 400 billion 2030 numbers are from HARC, Houston Advanced Research Center. And the 7% is actually 6.6%, and relative to the 2030 projection.
5. Finally, real info on evaporation!! Still no numbers, but we get a description of the process (I work the standard balance through at the end of these notes):
> Most data centers use an evaporative cooling system, in which the servers’ heat is absorbed by water. The heat is then removed from the water through evaporation, causing the water to be lost as vapor in the air. The cooler water then goes back through the machines, and this loop is regularly topped off with fresh water. After all, evaporation renders the water saltier and unusable after four or five cycles. “Then they dump the water, and it goes down the sewer,” Mace said.
> ...
> The Abilene Stargate campus will reportedly use a closed-loop, non-evaporative liquid cooling system that requires an initial refill of around 1 million gallons of water, with “minor” maintenance refills. Cook is skeptical that such closed-loop systems will use as little water as they suggest. It’s not possible, Cook says, to use the same water over and over again, recycled infinitely, to cool servers.
6. This article doesn't mention the 463 million figure anywhere, which makes me think that was original research from TechieGamers. They reference SAWS, the San Antonio Water System, but again the numbers come without context, so I'd need to do some original research to get any meaningful insight from them.
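To make the quoted loop concrete, here is the standard cooling-tower water balance. These are textbook relations, not from the article; the only article-derived input is the "four or five cycles" from Mace's quote above:

```python
# Toy cooling-tower water balance. Standard relations (not from the article):
#   makeup = evaporation + blowdown
#   cycles_of_concentration C = makeup / blowdown  =>  blowdown = evap / (C - 1)
def makeup_water(evaporation_gal: float, cycles: float) -> float:
    """Total fresh water drawn per `evaporation_gal` gallons evaporated."""
    blowdown = evaporation_gal / (cycles - 1)  # the water "dumped down the sewer"
    return evaporation_gal + blowdown

for c in (4, 5):  # the quote's "four or five cycles"
    print(f"C={c}: {makeup_water(1.0, c):.2f} gal of makeup per gal evaporated")
# C=4: 1.33, C=5: 1.25 -- i.e. most of the draw leaves as vapor, matching Mace's
# point that the evaporated fraction is simply gone rather than returned.
```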
criddell 2 hours ago [-]
It would be similar to me posting "Show HN: I built a turbo encabulator in Rust" when I'd actually hired coders from Craigslist to bring my idea to life.
globular-toast 6 hours ago [-]
If I can tell something is "vibe coded", that means it's bad. It doesn't matter what tools people use as long as the output is good. Vibe coding smells include:
1. Tons of pointless comments outlining trivial low-level behaviour,
2. No understanding of abstraction levels,
3. No real architecture at all,
4. Not DRY: no helper functions, or inconsistent use of said functions across the project,
5. Way too many lines of code.
None of these are shaming for use of any particular tool, they are just shaming the output.
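To make smells 1 and 4 concrete, a contrived Python fragment (my own illustration, not taken from any real submission):

```python
# Contrived fragment showing smells 1 and 4: every trivial step narrated by
# a comment, and the same clamping logic copy-pasted instead of factored out.
def move_cursor_smelly(x, y, w, h):
    # set new_x to x
    new_x = x
    # if new_x is less than 0, set new_x to 0
    if new_x < 0:
        new_x = 0
    # if new_x is greater than width minus one, set it to width minus one
    if new_x > w - 1:
        new_x = w - 1
    # do the same thing for y (copy-pasted rather than extracted)
    new_y = y
    if new_y < 0:
        new_y = 0
    if new_y > h - 1:
        new_y = h - 1
    return new_x, new_y

def clamp(v, lo, hi):
    """The helper the smelly version never extracted."""
    return max(lo, min(v, hi))

def move_cursor(x, y, w, h):
    # Same behaviour, a third of the code, no narration needed.
    return clamp(x, 0, w - 1), clamp(y, 0, h - 1)
```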
ofrzeta 6 hours ago [-]
OK, let's better not talk about "vibe coding", because we don't really have a definition of what it means. "Historically" it means "just letting the AI code without looking at its output", while I often see people who use AI more diligently using the term kind of tongue-in-cheek. My mistake for using the expression in the latter way.
vnhrmth 5 hours ago [-]
It's really odd that we now look for human-written code rather than AI-generated code, and I think this trend is going to grow across every form of data out there.
danterolle 13 hours ago [-]
Thanks! Although I had to use it for some things (the logo, for example, as I’m not a "graphic guy"), in the end, since it’s a simple project by design, I didn’t mind, and the result isn’t bad at all.
llbbdd 8 hours ago [-]
Genuinely why do you care?
SanitaryThinkin 8 hours ago [-]
This may not be entirely the right metaphor, but I kind of see it as the difference between fast food, a top-rated restaurant, and home-made cooking, with fast food being AI.
Generic, does the job, not the highest quality, bleak, fast, repetitious output.
wiseowise 4 hours ago [-]
It literally doesn't matter; if your product sucks, it sucks. Only the end result for the user matters.
robertlagrant 5 hours ago [-]
While I agree, because I like writing code, I do wonder if this is how assembly programmers felt when automated compilation started to take off.
HelloUsername 7 hours ago [-]
> There are several Pixel Art Editors that do the same things and even much more, but many require an account registration or the insertion of an e-mail or have a certain business model.
Latest Aseprite is still available with free (as in beer) source code to compile, even if it is a bit heavy on the dependencies these days, including requiring that you install a special fork of Skia, IIRC. I paid for it to get the pre-compiled binaries for Windows, but on Linux and OSX I always compiled it myself anyway. On FreeBSD, which is my desktop OS of choice now, I use the ancient open-source version of Aseprite, since that is what's most convenient to install (from the port). Maybe I should try Libresprite instead: https://libresprite.github.io/
For my programmer art I also use old (Autodesk) Animator (in DOSBox) a lot. It is small and runs anywhere. Perfect for doodling on my phone, with some configuration to add various on-screen buttons in DOSBox. Small enough (less than 1 MB) that the entire application plus all configuration and saved working files can go into every source code repository where I want to edit some pixel art. https://github.com/AnimatorPro
Also have VGA Paint 386 installed in DOSBox everywhere. Have not used it much, but it seems good (probably more interesting for those that want something closer to a Deluxe Paint clone). https://www.bttr-software.de/products/vp386/
Then there is https://orama-interactive.itch.io/pixelorama which is open source and seems to improve at a good pace; I just never took the time to look very closely.
Going to have a look at Tilf as well, to see if it is not too much work to get it to run on FreeBSD. Not being an expert at drawing anything, it helps to have many tools to switch between, as each tends to do something better (or more easily) than the others.
danterolle 52 minutes ago [-]
> Going to have a look at Tilf as well, to see if it is not too much work to get it to run on FreeBSD
It should work without any issues; as long as there’s a Python interpreter, you can definitely run it. If needed, let me know and I’ll try to work on it. I have plenty of other ideas to implement as well.
Why, of all the possible names, did you think TILF was the best one?
danterolle 2 hours ago [-]
There’s no specific reason; I just liked the idea of a little elf/goblin having the freedom to draw whatever they wanted. At first I wanted to call this project "Folletto" (Italian for elf), but then I thought it would be better to keep an extremely simple name: a tiny elf who picks up his pencil and starts drawing. Tiny Elf.
vorticalbox 6 hours ago [-]
Tile I’d Like to Fill.
runjake 8 hours ago [-]
It’s “Tilf”, not TILF, and it means Tiny Elf, per the docs.
Why? What’s the problem with it?
BriggyDwiggs42 7 hours ago [-]
Milf, dilf, etc
johnisgood 2 hours ago [-]
I did not make the association. Is something wrong with me?
bitwize 7 hours ago [-]
Wait till you see the image editor named after a kind of BDSM bottom.
renegat0x0 5 hours ago [-]
Please add GitHub topics (tags) to the project; it may boost its discoverability. I often combine topics with GitHub search to find interesting projects.
danterolle 52 minutes ago [-]
Done. Thanks!
hug 12 hours ago [-]
Great project!
I have one very silly question... Why is the elf logo not pixel art? :)
danterolle 2 hours ago [-]
I’m not a graphic designer, so I wouldn’t know where to start, but maybe in the future I could use Tilf to draw the logo! My highest artistic expression, for now, will probably be redrawing some EarthBound characters to experiment with SDL3.
zamadatix 13 hours ago [-]
I like that it really is simply built and packaged, I'm sure it was fun to hack away at. There's something about gluing together a million packages which sucks the fun out of tinkering (for me, at least).
danterolle 13 hours ago [-]
That’s also why the project was built from scratch. The only real dependency is PySide6; the icons don’t come from any package, and PyInstaller is used solely for bundling. As outlined in the README.md, running Tilf requires nothing more than an installed version of Python 3.
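As an illustration of how small that footprint is, a minimal sketch (not Tilf's actual code, just what a single-dependency PySide6 app looks like):

```python
# Minimal PySide6 app: `pip install PySide6`, then `python this_file.py`.
# That's the whole setup the comment above describes.
import sys
from PySide6.QtWidgets import QApplication, QLabel

app = QApplication(sys.argv)
label = QLabel("One dependency, one interpreter, double-click and draw.")
label.show()
sys.exit(app.exec())
```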
lardbgard 54 minutes ago [-]
Hehe, no AI.
bitwize 6 hours ago [-]
Much "an app can be a home-cooked meal" energy here. Write a program to scratch an itch. Good to see that spirit still alive.
danterolle 2 hours ago [-]
Thank you!
mouse_ 14 hours ago [-]
Congratulations!
What made you decide to go with PySide6?
ethan_smith 4 hours ago [-]
PySide6 is a solid choice for Python desktop apps: Qt's rendering capabilities make it ideal for pixel-perfect graphics manipulation, while avoiding the performance issues that can plague Tkinter and the dependency complexities of wxPython.
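For instance, a minimal sketch of what "pixel-perfect" means in Qt terms (illustrative only, not Tilf's code; the classes and enums are real PySide6 API, the drawing is made up):

```python
# QImage gives per-pixel access, and nearest-neighbor scaling keeps pixel
# edges crisp when zooming, which is exactly what a pixel art editor needs.
import sys
from PySide6.QtCore import Qt
from PySide6.QtGui import QColor, QImage, QPixmap
from PySide6.QtWidgets import QApplication, QLabel

app = QApplication(sys.argv)

canvas = QImage(16, 16, QImage.Format_ARGB32)
canvas.fill(QColor("white"))
for x in range(16):                 # draw a diagonal, one pixel at a time
    canvas.setPixelColor(x, x, QColor("black"))

# Qt.FastTransformation is nearest neighbor: a 16x16 sprite zoomed to 256x256
# stays blocky instead of being smoothed into a blur.
zoomed = canvas.scaled(256, 256, Qt.KeepAspectRatio, Qt.FastTransformation)

label = QLabel()
label.setPixmap(QPixmap.fromImage(zoomed))
label.show()
sys.exit(app.exec())
```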
danterolle 13 hours ago [-]
I already have some experience with Python/PySide6, and I was mainly interested in having a working prototype as soon as possible (I’m experimenting with SDL3, and animating squares isn’t exactly thrilling!). Plus, Qt widgets integrate very well with Python; it’s so easy to create a section, especially when the documentation is well written, and that helps a lot. Also, with PyInstaller, the build process for each platform is fairly straightforward (although there are a few extra steps for customized icons).
There are some downsides, of course (the bundle size, for example), but that's not a problem. The core idea is: double-click on Tilf and start drawing right away.
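For reference, that bundling step can be driven from Python too. A sketch with hypothetical file names (the flags are real PyInstaller options):

```python
# Equivalent to running `pyinstaller` on the command line; entry script and
# icon names here are hypothetical, not taken from the Tilf repo.
import PyInstaller.__main__

PyInstaller.__main__.run([
    "tilf.py",
    "--onefile",         # single self-contained executable
    "--windowed",        # no console window on Windows/macOS
    "--icon=tilf.icns",  # the "few extra steps": .icns on macOS, .ico on Windows
    "--name=Tilf",
])
```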
synergy20 12 hours ago [-]
Why not just the default Tk widgets? Might be far fewer external dependencies.
danterolle 2 hours ago [-]
I simply never used it, nor did I ever feel the need. I moved on from Tkinter to PySide6.
zoba 12 hours ago [-]
I recently discovered and have been fairly happy with PixelLab - an AI pixel art generator. I feel like they have a ways to go in features and UX, but it shows promise.