Rebuilding LWT’s API, part 2

Hi there! Much work has been done and the new LWT REST API is now ready!

For the record, the initial suggestion, drafted with the help of ChatGPT, looked like the following (see part 1 for details):


1. GET API Endpoints:
   - Get API Version: `GET /api/version`
   - Get Next Word to Test: `GET /api/test/next-word`
   - Get Tomorrow's Tests Number: `GET /api/test/tomorrow`
   - Get Phonetic Reading: `GET /api/text/phonetic-reading`
   - Get Theming Path: `GET /api/text/theme-path`
   - Get Texts Statistics: `GET /api/text/statistics`
   - Get Media Paths: `GET /api/media/paths`
   - Get Example Sentences: `GET /api/sentences/{word}`
   - Get Imported Terms: `GET /api/terms/imported`
   - Get Similar Terms: `GET /api/terms/{term}/similar`
   - Get Term Translations: `GET /api/terms/{term}/translations`

2. POST API Endpoints:
   - Update Reading Position: `POST /api/reading/position`
   - Add/Update Translation: `POST /api/translation/{word}`
   - Increment/Decrement Term Status: `POST /api/terms/{term}/status`
   - Set Term Status: `POST /api/terms/{term}/status/set`
   - Test Regular Expression: `POST /api/regexp/test`
   - Set Term Annotation: `POST /api/terms/{term}/annotation`
   - Save Setting: `POST /api/settings`

It featured 11 GET endpoints and 7 POST endpoints. The key point of this approach was that it was not a fundamental change but rather a reorganization of the already existing AJAX requests. However, GPT does not have access to all of the app's information: most endpoint names are simply shortened versions of their description sentence, and some were optimistic or plainly wrong. Take `GET /api/text/statistics`, for instance: how is one supposed to know that it returns statistics for a subset of texts?

Finally, the chosen implementation is as follows:


1. GET API Endpoints:
   - Get File Paths in the Media Folder: `GET /media-files`
   - Get Phonetic Reading: `GET /phonetic-reading`
   - Get Next Word to Review: `GET /review/next-word`
   - Get Tomorrow's Reviews Number: `GET /review/tomorrow-count`
   - Get Sentences containing Any Term: `GET /sentences-with-term`
   - Get Sentences containing Registered Term: `GET /sentences-with-term/{term-id}`
   - Get CSS Theme Path: `GET /settings/theme-path`
   - Get Terms similar to Another One: `GET /similar-terms`
   - Get Term Translations: `GET /terms/{term-id}/translations`
   - Get Imported Terms: `GET /terms/imported`
   - Get Texts Statistics: `GET /texts/statistics`
   - Get API Version: `GET /version`

2. POST API Endpoints:
   - Save Setting: `POST /settings`
   - Decrement Term Status: `POST /terms/{term-id}/status/down`
   - Increment Term Status: `POST /terms/{term-id}/status/up`
   - Set Term Status: `POST /terms/{term-id}/status/{new-status}`
   - Update Term Translation: `POST /terms/{term-id}/translations`
   - Create a New Term With its Translation: `POST /terms/new`
   - Set Text Annotation: `POST /texts/{text-id}/annotation`
   - Update Audio Position: `POST /texts/{text-id}/audio-position`
   - Update Reading Position: `POST /texts/{text-id}/reading-position`
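
The POST list distinguishes relative status moves (`status/up`, `status/down`) from an absolute set (`status/{new-status}`). A minimal sketch of that distinction, assuming LWT's 1–5 learning scale with special statuses 98 (ignored) and 99 (well known) — the exact clamping behaviour shown here is an assumption, not verified app logic:

```python
# Hedged sketch of what the three status endpoints distinguish. LWT uses
# learning statuses 1-5 plus special values (98 = ignored, 99 = well known);
# the clamping rules below are an illustration, not the app's verified code.
LEARNING_MIN, LEARNING_MAX = 1, 5

def bump_status(status: int, direction: str) -> int:
    """Handle /status/up and /status/down: move within the 1-5 range."""
    if not LEARNING_MIN <= status <= LEARNING_MAX:
        return status  # leave special statuses (e.g. 98, 99) untouched
    delta = 1 if direction == "up" else -1
    return min(LEARNING_MAX, max(LEARNING_MIN, status + delta))

def set_status(new_status: int) -> int:
    """Handle /status/{new-status}: set any valid status directly."""
    if new_status not in (98, 99) and not LEARNING_MIN <= new_status <= LEARNING_MAX:
        raise ValueError(f"invalid status: {new_status}")
    return new_status
```

Splitting the relative and absolute operations into separate endpoints keeps each handler trivial and makes the intent of a request visible in the URL itself.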

We now have 12 GET endpoints and 9 POST endpoints. Apart from the necessary corrections to GPT's output, the new scheme clearly specifies when the server can reuse existing data and when it has to build it from scratch.

A previous “GPT” endpoint was `GET /api/sentences/{word}`, which required a word to be passed as text. The server would then search for all sentences containing this specific word, parsing and adapting each of them, which is expensive. For a new word this is still done through `GET /sentences-with-term`, but for already registered words we use the word's ID with `GET /sentences-with-term/{term-id}`, which translates naturally into a SQL projection.

Another feat of this implementation is removing the need to pass sensitive data such as SQL. During a word review (test), only the server knows which subset of words should be tested, and it shows one of these words to the user. Previously, the subset selection was stored as part of the page URL, and requesting the next word to test triggered a page refresh. Now the URL only stores the ID of the language, text, or words to test, along with an identifier for the test type (language, text, or word). SQL injection is thereby prevented.
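
A hypothetical server-side sketch of that idea: the URL carries only a test type and an ID, which the server maps to a fixed, parameterized query (table and column names are invented for illustration):

```python
# Only these three fixed queries can ever run; the client never supplies SQL.
QUERIES = {
    "language": "SELECT word_id FROM words WHERE lang_id = ?",
    "text": "SELECT word_id FROM words WHERE text_id = ?",
    "word": "SELECT word_id FROM words WHERE word_id = ?",
}

def next_word_query(test_type: str, item_id: int):
    """Return a (sql, params) pair for the requested review subset.

    Unknown test types are rejected, so no client-supplied SQL ever runs.
    """
    if test_type not in QUERIES:
        raise ValueError(f"unknown test type: {test_type!r}")
    return QUERIES[test_type], (item_id,)
```

Because the SQL text is chosen from a server-side whitelist and the ID is bound as a query parameter, nothing the client sends is ever interpreted as SQL.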

On a more general scale, queries are now smaller and faster, and GET responses support caching. The new architecture also makes it possible to build tests with NPM, Chai, and SuperTest, with 100% code coverage on GET endpoints.

Aftermath

The API is well integrated into LWT: requests are smaller, and the app is faster and more secure. The API was released with LWT 2.9.0. Apart from a small fix in 2.9.1, it has received no further fixes (as of this writing), a good sign of its robustness. This RESTful system will now serve as a base for building a more dynamic LWT, paving the way to the future of the app.

As a first try at building a RESTful API, I consider it a great success, and a great initiation into the art. Happy language learning!
