SEOUL: Chinese AI app DeepSeek will not be available to download in South Korea pending a review of its handling of user data, Seoul authorities said on Monday (Feb 17). DeepSeek's R1 chatbot stunned investors and industry insiders with its ability to match the functions of its West…
DeepSeek the self-hosted model is pretty decent even distilled down to 8B, but I always make sure I get an abliterated version to remove the Chinese censorship (and also the built-in OpenAI censorship, given the history of how the model was actually developed).
To be clear, that only removes (or attempts to remove) refusals; it doesn’t add in training data that it doesn’t have. Ask it about Tiananmen Square, for example.
The abliterated model of DeepSeek can fully discuss Tiananmen Square. I’ve even tried the 4chan copypasta that allegedly gets Chinese chat users’ sessions dropped, and the prompts work fine.
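For anyone curious what "abliteration" actually does under the hood: it's directional ablation, where a "refusal direction" is estimated from the model's activations and then projected out of the weight matrices, so the model can no longer represent the refusal behavior along that direction. A minimal numpy sketch of the core projection step (the matrix shapes and the way the direction is obtained here are illustrative, not DeepSeek's actual internals):

```python
import numpy as np

def ablate_direction(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Project the refusal direction r out of weight matrix W.

    W' = (I - r r^T) W, with r normalized to unit length, so the
    outputs of W' have zero component along r.
    """
    r = r / np.linalg.norm(r)
    return W - np.outer(r, r) @ W

# Toy example: an 8x8 "weight matrix" and a made-up refusal direction.
# In practice r is estimated from activation differences between
# harmful and harmless prompts, layer by layer.
W = np.random.randn(8, 8)
r = np.random.randn(8)
W_ablated = ablate_direction(W, r)

r_unit = r / np.linalg.norm(r)
# Every column of the ablated matrix is now orthogonal to r.
print(np.allclose(r_unit @ W_ablated, 0.0))
```

This also makes the limitation in the comment above concrete: projection only deletes a behavior direction; it can't conjure up facts the model was never trained on.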
tl;dr: It’s not DeepSeek the model, it’s their app and its privacy policy.
Could you expand on how, please and thank you?
Sure. Here you go.
Much obliged!
Learned something new. Thank you!
You can also run your own fancy front-end and host your own GPT website (locally).
I’m doing that with docker compose in my homelab, it’s pretty neat!
```yaml
services:
  ollama:
    volumes:
      - /etc/ollama-docker/ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    image: ollama/ollama
    ports:
      - 11434:11434
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities:
                - gpu

  open-webui:
    build:
      context: .
      args:
        OLLAMA_BASE_URL: '/ollama'
      dockerfile: Dockerfile
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    volumes:
      - /etc/ollama-docker/open-webui:/app/backend/data
    depends_on:
      - ollama
    ports:
      - 3000:8080
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434/'
      - 'WEBUI_SECRET_KEY='
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

volumes:
  ollama: {}
  open-webui: {}
```
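To bring the stack up, something like the following should work (assuming the compose file is in the current directory and the NVIDIA container toolkit is installed; the model tag is just an example, swap in whichever distill you prefer):

```shell
# Start both containers in the background
docker compose up -d

# Pull a model into the ollama container (example tag)
docker exec -it ollama ollama pull deepseek-r1:8b
```

Open WebUI is then reachable at http://localhost:3000 and talks to Ollama over the internal Docker network, so nothing ever leaves your homelab.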