Akeneo to Microsoft Dynamics NAV (ERP) Connector
In large B2B online shops, success is determined by two factors: good products and the best possible shopping experience. The shopping experience in particular comes down to well-managed information: orders, customer and product data, and (customer-specific) prices all have to be available. Using Akeneo and Microsoft Dynamics NAV (ERP) as provided requires data synchronization between the two systems. Handling these synchronizations manually is time-consuming and error-prone. This is why basecom developed Oktopus to connect Microsoft Dynamics NAV (ERP) to Akeneo automatically.
What the connector does
As the name suggests, Oktopus connects the two endpoints via their standard APIs. The software automates all data synchronization processes between the corresponding databases and thereby eliminates these time-consuming and error-prone manual steps.
How it generally works
Oktopus is a powerful middleware acting as an interface between Akeneo on the one hand and Microsoft Dynamics NAV (ERP) on the other. A combination of RabbitMQ and YAML mapping files provides a high degree of process reliability. Oktopus uses the standard REST APIs of NAV and Akeneo and is loosely coupled from their cores, so changes in their cores do not affect the usage of Oktopus.
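To give a feel for the approach, a mapping file could look roughly like this. The structure and field names below are invented for illustration and are not the actual Oktopus schema:

```yaml
# Illustrative only – not the real Oktopus mapping schema
product_export:
  source: akeneo
  target: nav
  fields:
    sku:        "No."          # Akeneo identifier → NAV item number
    name-en_US: "Description"  # localized product name → NAV description
    weight:     "Net Weight"
```

The advantage of such declarative mapping files is that field assignments can be adjusted without touching the connector's code.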
Out of the box, the following data can be exported from Akeneo to Microsoft Dynamics NAV (ERP):
- Products (attributes and features)
- Catalogue price
- Bulk prices
- Customer-specific prices
- Stock information
- Previous orders (via other sales channels, historical orders and order states)
- Debtor (customer) information
The export is executed automatically and synchronizes changes in Akeneo with Microsoft Dynamics NAV (ERP) according to the configured sync time.
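The sync time itself is part of the configuration. As an illustration only (the variable name is an assumption, not the real Oktopus setting), such a schedule could be expressed in cron syntax in the environment file:

```
# Hypothetical .env excerpt: run the Akeneo → NAV export every 15 minutes
SYNC_SCHEDULE="*/15 * * * *"
```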
Requirements
- Microsoft Dynamics NAV (ERP): Version >= 2008 R2
- Akeneo: Version >=2.0
- Server for Oktopus: 2 cores, 4 GB RAM, 4 GB Swap, 20 GB Storage
- Docker CE (>=18.03)
- Docker-Compose (>=1.21)
- Copy the file `.env` and change the values to match your needs (e.g. change the forwarded ports to ports that are available on your host)
- Run `docker/scripts/init.sh` to pull & build the necessary images and start the docker stack
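For illustration, the `.env` could contain entries like the following. The variable names here are assumptions; use whatever names your copy of the file actually defines:

```
# Example port overrides – pick ports that are free on your host
NGINX_PORT=8080
POSTGRES_PORT=5433
RABBITMQ_MANAGEMENT_PORT=15673
```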
To work with Oktopus properly you need systems to integrate with. For testing & development purposes you can use the provided submodules. Check out the desired git submodule and follow these steps to get the integrated system running:
- Copy the file `.env` contained in the submodule's folder and change the values to match your needs (e.g. change the forwarded ports to ports that are available on your host)
- Run `docker/scripts/init.sh` from the submodule's folder to pull & build the necessary images and start the corresponding docker stack
- You might need to create API credentials and add them to the corresponding environment files so that Oktopus can connect to the system's API
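For the Akeneo side, the standard REST API authenticates with a client id/secret plus a username/password. As a sketch (the variable names below are assumptions, not the connector's actual configuration keys), the environment file could hold entries like:

```
# Hypothetical entries for the Akeneo API connection
AKENEO_API_BASE_URL=http://akeneo.local:8081
AKENEO_API_CLIENT_ID=<client id created in Akeneo>
AKENEO_API_SECRET=<matching secret>
AKENEO_API_USERNAME=oktopus
AKENEO_API_PASSWORD=<password>
```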
This setup consists of multiple services. Some are only available when the `.env` variable `COMPOSE_ENV` is set to "dev".
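Concretely, development mode is toggled with a single `.env` entry:

```
# Enables the additional development-only services
COMPOSE_ENV=dev
```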
The following services are included:
- build (based on ubuntu:16.04)
- cron (based on the build-image)
- supervisord (based on the build-image)
- redis (redis:latest)
- rabbitmq (based on rabbitmq:management)
- php-fpm (based on php:7.1-fpm)
- nginx (nginx:latest)
- postgres (postgres:latest)
- application-data-container (based on the build-image)
The containers marked with "based on ..." are modified for this setup. The build image & container provide an on-demand project CLI and contain all the tools necessary for building dependencies.
The following services are only available in "dev" mode:
- mailcatcher (schickling/mailcatcher)
- blackfire (blackfire/blackfire)
- pgadmin (chorss/docker-pgadmin4)
- redis-commander (rediscommander/redis-commander:latest)
These services are meant for a development environment & debugging purposes.
All Docker-Compose commands must be run through `bin/compose` to ensure that all necessary environment variables are loaded and the correct configuration files are used, depending on your settings.
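A wrapper like `bin/compose` presumably exports the variables from `.env` before delegating to docker-compose. The snippet below imitates that loading step with a temporary file standing in for the real `.env`; it is a sketch of the mechanism, not the actual script:

```shell
# Sketch: export every variable from an env file into the environment,
# the way a docker-compose wrapper typically does before delegating.
ENVFILE=$(mktemp)
printf 'COMPOSE_ENV=dev\nNGINX_PORT=8080\n' > "$ENVFILE"
set -a          # auto-export everything sourced while this is active
. "$ENVFILE"
set +a
echo "$COMPOSE_ENV"   # → dev
rm -f "$ENVFILE"
```

With the variables exported this way, a subsequent `docker-compose` call can substitute them into its configuration files.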
The most important commands are:

```
bin/compose up -d
```
You can find further & more detailed information of the docker-compose commands in the official documentation: https://docs.docker.com/compose/reference/
If you want to access the Symfony console commands or any other command-line tool like Composer, this project provides the build container. You can launch & access an instance of this service on demand using `bin/shell`. The created container is cleaned up automatically as soon as you close your session.
The project & cache directories are shared between all containers, so all changes made to these directories are persistent & immediately available to the necessary services.
Volumes on "dev" & "prod"
When using Docker, there are two different ways of making data persistently available to multiple containers at the same time:
- "named" volume
- "bind-mount" volume
A "named" volume can be seen as a Docker-internal volume. It is persistent, can be mounted into containers as needed, and is very fast. A "bind-mount" volume, on the other hand, is quite slow, but the data isn't stored inside Docker: it lives on the host system in a preconfigured directory. Because it's the only way to make host data directly available to the containers, it is often used to mount project directories & config files.
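The difference is easiest to see in a docker-compose excerpt. The service and path names here are illustrative, not taken from this project's actual compose files:

```yaml
# Illustrative docker-compose excerpt
services:
  php-fpm:
    volumes:
      - application-data:/var/www/application   # named volume: Docker-managed, fast
      # - ./application:/var/www/application    # bind mount: lives on the host, slower

volumes:
  application-data:
```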
In this project you can control which type of volume is used via the corresponding `.env` variables. For example, the setting `APPLICATION_PROJECT_VOLUME=./application` defines the application volume as a host mount. All changes to the code therefore change the application's behaviour immediately & without any delay, but the application will be slower, because all services must access the files through this slow mount.
If you're in an environment where the code doesn't change often & performance is far more important (e.g. on a production system), you should use a "named" volume instead, e.g. by setting the volume name in `.env`. The different volume names can be found in the docker-compose configuration files.
Because changes on the host's filesystem aren't propagated into the named volumes, you need to trigger a copy command afterwards: `bin/sync`. This command does a full copy of the host's application directory into the container's application directory.
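The effect of such a full copy can be sketched with plain `cp` on throwaway directories; presumably the real command copies into the named volume's mount point instead of a temp directory:

```shell
# Imitate the full-copy step with temporary directories standing in for
# the host project directory and the named volume's mount point.
SRC=$(mktemp -d)
DEST=$(mktemp -d)
mkdir -p "$SRC/src"
echo '<?php echo "ok";' > "$SRC/src/index.php"
cp -a "$SRC/." "$DEST/"       # full copy, preserving structure and attributes
ls "$DEST/src"                # → index.php
```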
There is, however, a way to use named volumes in your development environment without manually running a command after each change: `bin/sync watch`. This command does a full copy and then watches for changes; every time a file is changed on the host, it is copied into the named volume.
To use this command, you need to install `fswatch`. On macOS this can be done easily using brew:

```
brew install fswatch
```
[!] Keep in mind that changes made by the services aren't propagated to your host. Therefore log files and database storage are only accessible through the corresponding container.
If you're facing problems, or changes aren't synced as fast as necessary, you should stick to the bind mounts. In most cases, though, "named" volumes in combination with the `bin/sync watch` command give you the right combination of performance & automatically propagated changes for a good development environment.