Immich is great, I host it externally for family to contribute to an extended photo album.
It does seem to get weird backing up from my phone, as if it’s trying to back up items it’s backed up before. That doesn’t seem to show up as duplicate photos in the library though, so it hasn’t been a huge issue. Bigger fish to fry tbh.
Right now I’m working on integrating homepage with organizr then converting it to use authentik behind the scenes, with users using their plex oauth to get sso to the rest of their services.
Docker is amazing
> It does seem to get weird backing up from my phone, as if it’s trying to back up items it’s backed up before.
That’s odd. I haven’t had that before, but I also don’t use the phone backup feature often. I’ve seen a lot of issues with it that seem to just be random occurrences that aren’t widespread, and sort of just pop out of nowhere only on a small set of devices, so I’m wondering if they just have to improve application stability a bit.
One thing that does drive me nuts though is timestamp shenanigans. Like I’ll have some photos taken on the same day at different times, and at a certain point it’ll just decide to label some of them in the timeline view as having occurred a day earlier or later than they actually did, even though when you view the image properties, it has the correct date.
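For what it’s worth, one classic way that happens (purely an assumption about Immich’s behavior, not a confirmed diagnosis) is a timezone mismatch: the properties panel shows the photo’s local EXIF time, while the timeline buckets the same instant in a different zone, which can land on a different calendar day. A quick Python illustration:

```python
from datetime import datetime, timedelta, timezone

# A photo taken at 23:30 local time in UTC-5 is "the next day" in UTC.
# If one view groups by local date and another by UTC date, the same
# photo appears to shift a day even though the stored time is correct.
local = datetime(2024, 1, 1, 23, 30, tzinfo=timezone(timedelta(hours=-5)))
utc = local.astimezone(timezone.utc)
print(local.date(), "->", utc.date())  # 2024-01-01 -> 2024-01-02
```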
Recent convert to immich and hugely impressed by the software and project - one of FOSS’s shining stars. Good work everyone.
Really great software. Works just as you’d want, and has enough style and flow too.
Immich sounds so awesome I plan to start using it soon.
My only warning for a new user to Immich is that it does not support chunk uploading. So if you’re like me and took a 1 hour 40GB 4k video, it will never upload. It will start, fail, and start over again forever.
Thanks for the warning! I don’t think I have any hour-long videos, maybe up to 10 or 20 minutes. But that is a very weird and annoying bug.
You’re welcome. It’s not a bug exactly; the library they use to upload files just doesn’t do chunking. There’s a GitHub request for it, but it’s not done yet.
I just set it up this week, I was just settling with nextcloud memories before. Night and day difference.
A few pain points in the process but overall was pretty easy to set up and even add 2FA (though I can’t say authelia was easy to set up to do so), and once it’s off the ground it’s super smooth
Got an authelia config a man can bum?
I can never get it working.
Absolutely. It was a pain in the ass to get up and running, but it’s running smooth with this setup. You can probably streamline and clean this up a bit but it’s working for me:
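A minimal sketch of the shape of that setup (placeholders, not the exact config: `auth.example.com`, `app.example.com`, and the `authelia:9091` address are examples, and the verify endpoint path varies by Authelia version, so check the Authelia/Caddy integration docs):

```caddyfile
# The Authelia portal itself
auth.example.com {
	reverse_proxy authelia:9091
}

# A service locked down behind Authelia via Caddy's forward_auth directive
app.example.com {
	forward_auth authelia:9091 {
		uri /api/verify?rd=https://auth.example.com/
		copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
	}
	reverse_proxy app:8080
}
```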
Also just to note, the caddyfile changes aren’t necessary for Immich, that’s just for any service without an integration that you still want to lock down. Immich’s integration is pretty straightforward once authelia itself is up and running.
Would depend on what reverse proxy you’re using, I saw they replied with Caddy, I set it up using Traefik instead
Does it still have breaking changes when upgrading to a newer version?
In the past it felt like I was running an alpha version, where I spent more time fixing it than enjoying its features.
It’s written all over their website that we have to expect breaking changes. This year they will release the first stable version tho.
Reminds me that now that all my data is processed (in particular the heavy ML part) I should move the resulting container data to my (much less powerful but always on) NAS.
If it helps, I have an ML container on my more powerful machine and have my Immich instance pointing at that first, then the local NAS container, in order. If the powerful machine is on, it powers through (so I turn it on if I’m about to dump a batch of photos), and if it’s not, the NAS churns through slowly (e.g. if my phone uploads one or two).
It’s super easy to do! Would recommend.
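The fallback setup described above boils down to one server-side variable, if your Immich version supports a comma-separated list (the variable name and the in-order fallback behavior are worth verifying against the current docs; hostnames here are placeholders):

```
# .env for the Immich server on the NAS
# Tries each ML endpoint in order, falling back to the next if one is unreachable
IMMICH_MACHINE_LEARNING_URL=http://fast-machine.lan:3003,http://localhost:3003
```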
Ah nice, I was aware of the remote ML instance option but I didn’t know it was optional, i.e. if it’s there rely on it, if not still work. I thought it was either do ML locally ALL the time or do ML remotely all the time.
Is it just an optional ML endpoint on the NAS instance pointing to the ML only container on the more powerful machine on the same LAN?
Been hosting on my Synology NAS for a while now. This app kicks hard. I love it.
Is it easy to self-host immich so that it operates on a READ-ONLY basis with my images? I really only want to use it for the local-AI indexing/search, but not as a backup or photo management solution (Synology Photos works just fine for that).
Yes, and:
You can point Immich to your photo uploads as an external library, too. Then make a cron job to rescan regularly.
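The rescan can be a one-line crontab entry hitting the server’s API (the endpoint path and the `LIBRARY_ID`/`API_KEY` placeholders are assumptions; check your Immich server’s API docs, and create an API key under your account settings):

```
# crontab -e: rescan the external library nightly at 03:00
0 3 * * * curl -s -X POST "http://immich.local:2283/api/libraries/<LIBRARY_ID>/scan" -H "x-api-key: <API_KEY>"
```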
That being said, I now have my old photos as external libraries and new stuff directly in Immich. After using it a while, I realized that it’s just that good.
Yes, set the external library bind mount in the docker compose project to `:ro` (read only).

Figured it out :) It’s indexing my media now!
Yes, that’s how I use it. It has access to a read only bind mount of my photo directory. The ML doesn’t write exif data to the images, just keeps that in its database.
I think you can use Immich external libraries for this. Also, to be extra safe, you can mount your external images folder as read only by adding `:ro` to the docker volume mount, so that the container won’t be able to modify anything as a precaution.
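For reference, a sketch of what that looks like in the compose file (both paths are placeholders):

```yaml
services:
  immich-server:
    volumes:
      # host path : container path : ro — read-only, the container can't write here
      - /srv/photos:/mnt/media/photos:ro
```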
This is what I was thinking, too.
Alright, looks like I’ll be setting it up soon! LOL
You can also try out photoprism for that. Immich is best for an all-in-one solution as a replacement for google photos.
Photoprism also has face recognition, maps, and many more features geared towards photography than immich.
I realized after using photoprism that I am too basic for that haha
I don’t think Photoprism has contextual search. Anyway, immich installed and running on my NAS 🤭
Yes, pretty easy - that’s exactly how I use it
Still requires docker?
I am curious, why not docker? It’s pretty convenient in my setup (docker compose + traefik). If I need to migrate it’s really simple, and if I want to nuke a service I just bring it down and delete the path.
Not on NixOS!
services.immich.enable = true;
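A slightly fuller sketch of that module (option names are from the nixpkgs `services.immich.*` options; check the options search for your channel before relying on them):

```nix
{
  services.immich = {
    enable = true;
    host = "0.0.0.0";                   # listen address
    port = 2283;                        # default web/API port
    mediaLocation = "/var/lib/immich";  # where originals and thumbnails live
    openFirewall = true;                # open the port in the NixOS firewall
  };
}
```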
There are some people who have managed to get it working under podman.
I’m running it under podman within NixOS using compose2nix on a rpi5.
I’d rather use the default NixOS option (services.immich.enable), but nixpkgs-unstable doesn’t have all arm64 binaries prebuilt and building those can take a long time.
If you’re using Arch, the AUR package works well
I’m fully aware of the joy of containers, but I just don’t want all that extra faff
Extra faff?
The additional software required to run it in a container plus its configuration, on top of Immich’s configuration.
Just install & configure Immich, done.
That’s… Just… Docker compose… You copy, paste, `docker-compose up`, and you’re done…
Or… <using package manager of choice>
install immich
Done.
No need to map internal & external ports, wrestle with permissions (or… good grief, run the container as root!), etc, etc.
It’s just… less faff.
Plus I save all that additional disk space, not having to install docker! 😉
Don’t get me wrong; Containers, chroot jails, Type-1 & Type-2 hypervisors all had their place in the history of my systems, I just don’t see it as a necessity.
I see. I still prefer docker for the simplicity tho.
Yeah. I’m also waiting for a native Ubuntu package, I don’t want to deal with Docker.
I’m planning to do it with podman. It’s supposed to be quite easy to convert between the two.
Can confirm, works without problems in rootless podman.
Can you give me some pointers? I’m still new to docker and podman; hoping to get this going without too much learning curve to start with!
Sure, I set it up in NixOS, though this is the short form of that:
- Install Podman and passt + slirp4netns for networking
- Setup subuid and subgid
usermod --add-subuids 100000-165535 --add-subgids 100000-165535 johndoe
- I’m using quadlets, so we need to create those:

$HOME/.config/containers/systemd/immich-database.container
```ini
[Unit]
Description=Immich Database
Requires=immich-redis.service immich-network.service

[Container]
AutoUpdate=registry
# add your environment variables file here
EnvironmentFile=${immich-config}
# hash from the official docker-compose, has to be updated from time to time
Image=registry.hub.docker.com/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0
Label=registry
# update to the newest image; since the image is pinned by hash it will never
# move to another version unless the hash is changed
Pull=newer
# attach to the podman network
Network=immich.network
# Map uid 999 and gid 999 to the user running the service, so you can access
# the files in the volume without any special handling; otherwise root would
# map to your uid and uid 999 to some very high subuid that you can't access
# without podman. This modifies the image at runtime and may make the systemd
# service time out — maybe increase the timeout on low-powered machines.
UserNS=keep-id:uid=999,gid=999
# Database persistence
Volume=/srv/services/immich/database:/var/lib/postgresql/data
# timezone info
Volume=/etc/localtime:/etc/localtime:ro
# also part of the official docker-compose... last time I checked anyways
Exec=postgres -c shared_preload_libraries=vectors.so -c 'search_path="$user", public, vectors' -c logging_collector=on -c max_wal_size=2GB -c shared_buffers=512MB -c wal_compression=on

[Service]
Restart=always
```
$HOME/.config/containers/systemd/immich-ml.container
```ini
[Unit]
Description=Immich Machine Learning
Requires=immich-redis.service immich-database.service immich-network.service

[Container]
AutoUpdate=registry
# same config as above
EnvironmentFile=${immich-config}
Image=ghcr.io/immich-app/immich-machine-learning:release
Label=registry
# auto update on startup
Pull=newer
Network=immich.network
# machine learning cache
Volume=/srv/services/immich/ml-cache:/cache
Volume=/etc/localtime:/etc/localtime:ro

[Service]
Restart=always
```
$HOME/.config/containers/systemd/immich.network
```ini
[Unit]
Description=Immich network

[Network]
DNS=8.8.8.8
Label=app=immich
```

$HOME/.config/containers/systemd/immich-redis.container

```ini
[Unit]
Description=Immich Redis
Requires=immich-network.service

[Container]
AutoUpdate=registry
# should probably change this to valkey...
Image=registry.hub.docker.com/library/redis:6.2-alpine@sha256:eaba718fecd1196d88533de7ba49bf903ad33664a92debb24660a922ecd9cac8
Label=registry
# auto update on startup
Pull=newer
Network=immich.network
Timezone=Europe/Berlin

[Service]
Restart=always
```
$HOME/.config/containers/systemd/immich-server.container
```ini
[Unit]
Description=Immich Server
Requires=immich-redis.service immich-database.service immich-network.service immich-ml.service

[Container]
AutoUpdate=registry
# same config as above
EnvironmentFile=${immich-config}
Image=ghcr.io/immich-app/immich-server:release
Label=registry
# auto update on startup
Pull=newer
Network=immich.network
PublishPort=127.0.0.1:2283:2283
# I think you can put images here to import, though I never used it
Volume=/srv/services/immich/upload:/usr/src/app/upload
# timezone info
Volume=/etc/localtime:/etc/localtime:ro
# imported images are stored here
Volume=/srv/services/immich/library:/imageLibrary

[Service]
Restart=always

[Install]
WantedBy=multi-user.target default.target
```
- systemctl --user daemon-reload
- systemctl --user enable --now immich-server.service
- enable linger so systemd user services run even if the user is logged off
loginctl enable-linger $USER
- Set up a reverse proxy like Caddy so you can access it at a simple address like immich.mini-pc.localnet
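That last step can be as small as a one-site Caddyfile (a sketch assuming Caddy runs on the same host as the PublishPort=127.0.0.1:2283 binding above, and that your LAN resolves the hostname):

```caddyfile
immich.mini-pc.localnet {
	reverse_proxy 127.0.0.1:2283
}
```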
Docker to rootful podman is easy. Docker to rootless podman can get annoying due to the file permissions and slightly more limited networking
It’s only good for phone photos though. If you also take pictures with a camera, it doesn’t have any clear way to handle those.
It sure does: just put them in an external library and scan for them. That’s what I do and it works flawlessly.
Hm, maybe I ought to look at it again then. Thanks for the tip.