One week later I could get some time to write down what we did and which ideas are behind the Barcelona Sprint 2016...
It all started two months ago, when we were thinking about how to organize the Barcelona Sprint so it would be productive. We had already decided to create three teams (REST API, experimental new backend and frontend), so we needed to see how a one-week sprint could be organized to reach some goals. At that point I contacted Asko, Timo and Nathan to ask if they could each lead a team and prepare a pre-sprint discussion (so we would not need a pillow fight over React vs Angular...) and the goals for the sprint. They did a great job, and the result was a document:
https://docs.google.com/document/d/1_KHaA5TkvsT5a3FQ-sTRtVLvikSCm7Kn0stxdYAyXHg
With that document in mind, we discussed with all the sprinters who were coming and defined our goals:
- API: reach a stable state so we can start building frontend and backend with it.
- BACKEND: play, experiment and see if it's possible to create our own backend for Plone that is API-centric and async.
- FRONTEND: have a prototype that solves most of the problems of creating a powerful, customizable JS application on top of a content management API.
So we started a really nice week with a lot of grey matter, nice weather and energy! I really want to thank Barcelona Activa and Startup Bootcamp for hosting our sprint in their facilities! It's been great to have so much space and so many resources to work and concentrate!
I also want to thank all the sprinters, because they made it happen! We accomplished, and even surpassed, all the goals! By the end of the week the faces of all the sprinters showed joy and pride, so it's been great!
Last but not least, a special thanks to the whole Iskra/Intranetum team for helping to make it possible: the ones who attended the sprint (Aleix, Alex, Berta and Marc) and the ones who stayed at the office (especially Eudald).
I've been talking with all three groups, but I was mostly involved with the backend team, so I'll try to explain some backend decisions and results.
Backend
Right now the plone.server package (github.com/plone/plone.server) is a WIP backend that has:
- ZTK security system to provide Permissions and Roles
- Annotation local roles on content objects
- Multisite system with DX Site object
- Plone.registry on top of site objects to hold all configuration
- Dexterity content types (without CMF) and FTI
- Credentials extraction and user factory customizable engine
- Aiohttp HTTP server
- Python 3.5+ support
- The whole app runs on an asyncio loop
- A new traversal with content and language negotiation based on API definition
- Extensible API definition in a JSON file
- Basic permission checkers
- Frame inspection based global request operation
- Websockets basic implementation
- Basic transaction support for asyncio
- Async utilities to run parallel processes
- Serialization of zope.schema to JSON
- plone.example package with dummy content type
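To illustrate the asyncio-centric design in the list above, here is a minimal, hypothetical sketch (not plone.server's actual code): HTTP handlers and long-running async utilities cooperate on one event loop instead of blocking each other. The handler and utility names are invented.

```python
import asyncio

async def handle_request(path: str) -> str:
    # Stand-in for an aiohttp request handler doing async I/O.
    await asyncio.sleep(0.01)
    return f"rendered {path}"

async def background_utility(results: list) -> None:
    # Stand-in for an async utility (e.g. an indexing queue consumer).
    for _ in range(3):
        await asyncio.sleep(0.005)
        results.append("utility tick")

async def main() -> list:
    results: list = []
    # Requests and the utility run concurrently on the same loop.
    pages = await asyncio.gather(
        handle_request("/plone/front-page"),
        handle_request("/plone/news"),
        background_utility(results),
    )
    return [p for p in pages if p] + results

if __name__ == "__main__":
    print(asyncio.run(main()))
```

The point is that a slow utility never stalls request handling: both yield back to the loop at every `await`.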
There are two missing parts that are covered by external tools for now:
- Catalog: we decided not to catalog objects in the ZODB (at least for now), so we provide transaction-aware Elasticsearch indexing functionality.
- User DB and JWT generation: Iskra open sourced a custom OAuth server in Python/Redis/LDAP (plone.oauth) that is aware of groups/roles and can provide global roles and JWT generation, so we integrated it with plone.server (at least for now).
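For readers unfamiliar with JWTs, here is a stdlib-only sketch of HS256 token generation (plone.oauth's real code, claims and secret handling differ; the payload and secret below are made up):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # URL-safe base64 without padding, as JWT requires.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    # HMAC-SHA256 over "header.payload" gives the third segment.
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

token = make_jwt(
    {"sub": "user@example.com", "exp": int(time.time()) + 3600},
    b"shared-secret",
)
print(token)
```

The server only has to share the secret with the OAuth service to verify tokens, so plone.server can authenticate requests without touching the user database itself.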
There are lots of things to work on (workflows, improving the request, improving transactions on ZODB...), but it's a long-term project!
Opinion on some concerns
About MVCC: we can maintain it in the new core with three different approaches (thanks Asko!):
- For websockets, create a single connection for each one and delegate the commits to the client.
- For API requests (PUT/POST/DELETE/PATCH), do a db.open() to create a connection object for each request.
- For async utilities that are not request-aware, create a connection for each one, with a non-request-aware DB object.
It's clear that lots of websockets, utilities and requests will cost memory, depending on the connection cache size.
This approach is still not implemented; we are discussing the different options.
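The three lifecycles above can be modeled with a toy sketch (not real ZODB code: ZODB's `DB.open()` does return a `Connection` with its own object cache, but the classes below are invented for illustration):

```python
class Connection:
    def __init__(self, db: "DB") -> None:
        self.db = db
        self.cache: dict = {}   # each connection pays for its own cache

    def close(self) -> None:
        self.db.open_connections.remove(self)

class DB:
    def __init__(self) -> None:
        self.open_connections: list = []

    def open(self) -> Connection:
        conn = Connection(self)
        self.open_connections.append(conn)
        return conn

db = DB()

# Strategy 1: one long-lived connection per websocket.
ws_conn = db.open()

# Strategy 2: a fresh connection per write request, closed afterwards.
req_conn = db.open()
req_conn.close()

# Strategy 3: one connection per async utility, from a non-request-aware DB.
utility_conn = db.open()

print(len(db.open_connections))  # the websocket and utility connections remain
```

This makes the memory concern concrete: every live websocket and utility keeps its own cache alive, while request connections are short-lived.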
About the Elasticsearch cataloging strategy: indexing is triggered on commit success, so search functionality will not be available before the commit. That is a change from the current stack, and in my opinion we are abusing catalog searches for navigation and rendering. As there are no templates, I think it's possible to deal with BTree-based navigation.
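A simplified model of "index only on commit success" (the real implementation hooks into the transaction machinery; Elasticsearch is faked here with a plain dict, and the class is invented for illustration):

```python
class Transaction:
    def __init__(self, index: dict) -> None:
        self.index = index
        self.pending: list = []   # (uid, body) ops queued during the txn

    def queue_index(self, uid: str, body: dict) -> None:
        self.pending.append((uid, body))

    def commit(self) -> None:
        # Only after the commit succeeds do the docs reach the catalog,
        # so a search during the transaction cannot see them yet.
        for uid, body in self.pending:
            self.index[uid] = body
        self.pending.clear()

    def abort(self) -> None:
        self.pending.clear()   # nothing ever reaches the index

es_index: dict = {}
txn = Transaction(es_index)
txn.queue_index("doc-1", {"title": "Front page"})
assert "doc-1" not in es_index   # not searchable before commit
txn.commit()
print(es_index)
```

An aborted transaction simply drops its queued operations, so the index never sees uncommitted content.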
CI/CD
plone.server, plone_client and plone.oauth are tested on Travis CI with the needed backend services, using docker-compose to avoid mocking them.
plone.server, plone_client and plone.oauth are built as Docker containers on Docker Hub for each commit.
After each plone.server build on Docker Hub, a deployment is done to a sandbox cluster.
The main idea is to provide a continuous integration and continuous deployment story.
Try the sandbox:
git clone https://github.com/pyrenees/docker.git
cd docker
python get_token.py http://130.211.51.51:32144
# Choose any RAW token
# Call with your HTTP tool:
GET http://130.211.51.51:32535/plone/
AUTHORIZATION: bearer ****RAWTOKEN****
ACCEPT: application/json
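The same call can be prepared with Python's standard library; the token is the placeholder from above (substitute a RAW token from get_token.py), and `urllib.request.urlopen(req)` would actually perform the GET against the sandbox:

```python
import urllib.request

# Build the authenticated GET shown above; urlopen(req) would send it.
req = urllib.request.Request(
    "http://130.211.51.51:32535/plone/",
    headers={
        "Authorization": "bearer ****RAWTOKEN****",
        "Accept": "application/json",
    },
)
print(req.get_header("Accept"))
```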
Soon, integration with plone_client, a roadmap for plone.server and tests will be included!
Performance
After deploying the application, I ran a small read-performance test to compare plone.server against the current stack (Plone 5 with plone.restapi):
Operation: GET /dexterity_object with authentication
Result: the same response from both stacks
plone.server is 241% faster right now with 10-100 concurrent users
plone.server is 342% faster right now with 700 concurrent users
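For the curious, this is roughly the shape of such a read test (the real numbers above came from hitting the two deployed stacks; the handler here is a stub that just simulates I/O, so the timings it prints mean nothing by themselves):

```python
import asyncio
import time

async def fake_get(latency: float) -> int:
    # Stub for an authenticated GET /dexterity_object call.
    await asyncio.sleep(latency)
    return 200   # simulated HTTP status

async def run_level(concurrency: int, latency: float = 0.01) -> float:
    # Fire `concurrency` requests at once and measure wall time.
    start = time.perf_counter()
    statuses = await asyncio.gather(
        *(fake_get(latency) for _ in range(concurrency))
    )
    assert all(s == 200 for s in statuses)
    return time.perf_counter() - start

async def main() -> dict:
    # Same concurrency levels as the 10-700 user runs above.
    return {c: await run_level(c) for c in (10, 100, 700)}

if __name__ == "__main__":
    print(asyncio.run(main()))
```

Running the same script against both stacks and comparing the per-level wall times is enough for a rough relative comparison.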