I didn't look super closely at the example app, but it looks like it's comparing a sync vs. async view? Or WSGI vs. ASGI? I think even a single WSGI worker running sync code should be able to handle more than one request at once if they're I/O-bound (e.g. waiting on DB queries), since each request can run in its own thread. Although that might still require a different kind of Gunicorn worker (the threaded gthread class) to actually get a thread per request like that.
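For reference, that threaded setup might look something like this in a Gunicorn config file (`myproject` is just a placeholder for the actual project package):

```python
# gunicorn.conf.py -- minimal sketch of a threaded WSGI setup,
# assuming a project package named "myproject" (placeholder)
wsgi_app = "myproject.wsgi:application"
workers = 2              # separate worker processes
worker_class = "gthread"
threads = 4              # each worker handles up to 4 requests concurrently,
                         # one thread per request, which helps when views are
                         # mostly waiting on I/O such as DB queries
```

The same thing can be done directly on the command line with `gunicorn --workers 2 --threads 4 myproject.wsgi:application`.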
To answer your original question, we "sort of" use async in production, but only in the sense that we run under ASGI with the Uvicorn worker class. All of our view/ORM code is still sync. We don't get a very large volume of requests at all, but I haven't seen any performance issues from concurrent or long-running requests with this setup over the past few years.
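That kind of setup looks roughly like this (again, `myproject` is a placeholder):

```python
# gunicorn.conf.py -- sketch of running Django under ASGI with Gunicorn,
# assuming a project package named "myproject" (placeholder)
wsgi_app = "myproject.asgi:application"  # Gunicorn's setting is called wsgi_app even for an ASGI app
workers = 2
worker_class = "uvicorn.workers.UvicornWorker"  # needs `pip install uvicorn`; newer Uvicorn releases
                                                # split this class out into the uvicorn-worker package
```

Sync views keep working under ASGI because Django runs them in a thread pool, so none of the view/ORM code had to change.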