I’m currently preparing for the Cisco Live lab, an event set to take place in Las Vegas in the upcoming weeks. As with any complex task, it’s been a journey of discovery, which, on this occasion, included a rather significant pitfall on my part.
My error lay in the construction of the Dockerfiles, in conjunction with a less than robust config file. As if to add a cherry on top of this problematic sundae, recent updates to certain Python modules further complicated the situation. This confluence of factors resulted in a baffling issue: my Ansible playbooks steadfastly refused to execute.
Despite every indication to the contrary, I was repeatedly confronted with an error message stating that the DNA Center SDK was not installed. I can assure you, this was certainly not the case. It felt like being accused of a crime I didn’t commit.
In this article, I’ll take you through my missteps, the process of resolving this issue, and, most importantly, the lessons I’ve learned along the way. After all, acknowledging our mistakes and learning from them is a cornerstone of professional growth.
When the Reliable Becomes Unpredictable
At the core of this issue was my Dockerfile. This same, simple Dockerfile had been a reliable ally during the Cisco Live session held in Amsterdam this past February, performing without a hitch throughout the event.
As part of our standard practice, my team undertook the task of verifying the lab. The task at hand was straightforward: executing a simple Ansible playbook aimed at adding a site into the DNA Center. A routine procedure, we thought, but alas, it was not to be.
To our surprise, we were consistently met with an error message:
"DNA Center Python SDK is not installed. Execute 'pip install dnacentersdk'"
It was as if we were being told to fill up a car that already had a full tank of gas. We knew the DNA Center Python SDK was installed, yet the error message insisted otherwise. This set the stage for a deep dive into the problem, a journey of troubleshooting and discovery that would ultimately lead to valuable insights.
FROM ubuntu:22.04

RUN apt-get update && \
    apt-get install -y gcc python3.11 git && \
    apt-get install -y python3-pip ssh && \
    pip3 install --upgrade pip && \
    pip3 install ansible && \
    pip3 install dnacentersdk && \
    pip3 install jmespath && \
    pip3 install pyats[full] && \
    pip3 install ansible-lint && \
    ansible-galaxy collection install cisco.dnac
- hosts: dnac_servers
  gather_facts: false
  tasks:
    - name: Create area hierarchy
      cisco.dnac.site_create:
        site:
          area:
            name: Amsterdam
            parentName: "Global"
        type: "area"
      register: site_creation

    - name: Pause
      pause:
        seconds: 60

    - name: Create building hierarchy
      cisco.dnac.site_create:
        site:
          building:
            name: CiscoLive
            parentName: "Global/Amsterdam"
            latitude: 52.377956
            longitude: 4.897070
        type: "building"
The surprising issue
Despite the persistent error message, a simple pip list command confirmed our initial belief: the Python module for the DNA Center SDK was indeed installed. Further investigation, using a highly verbose Ansible playbook execution (-vvvv), confirmed that the correct paths and Python versions were being utilized.
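For reference, the checks described above can be sketched as shell commands (the playbook and inventory filenames are illustrative, not from the lab itself):

```shell
# Confirm the SDK appears among the installed packages
pip3 list | grep -i dnacentersdk

# Re-run the playbook with maximum verbosity to inspect the
# interpreter path and Python version Ansible actually uses
# (site_create.yml and hosts are assumed filenames)
ansible-playbook -vvvv -i hosts site_create.yml

# Import the SDK directly -- this is what finally surfaced the real error
python3 -c "from dnacentersdk import api"
```

The last command is the decisive one: a bare import bypasses Ansible's error handling and shows the underlying traceback instead of the generic "SDK is not installed" message.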
However, it was not until we opened Python and attempted to import the dnacentersdk module that the real issue began to emerge. It appeared that Ansible had been somewhat of a misleading informant, not quite revealing the full extent of the problem.
/usr/bin/python3.10
Python 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from dnacentersdk import api
  File "<stdin>", line 1
    from dnacentersdk import api
IndentationError: unexpected indent
>>> from dnacentersdk import api
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/requests_toolbelt/_compat.py", line 48, in <module>
    from requests.packages.urllib3.contrib import appengine as gaecontrib
ImportError: cannot import name 'appengine' from 'requests.packages.urllib3.contrib' (/usr/local/lib/python3.10/dist-packages/urllib3/contrib/__init__.py)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/dnacentersdk/__init__.py", line 30, in <module>
    from .api import DNACenterAPI
  File "/usr/local/lib/python3.10/dist-packages/dnacentersdk/api/__init__.py", line 39, in <module>
    from dnacentersdk.restsession import RestSession
  File "/usr/local/lib/python3.10/dist-packages/dnacentersdk/restsession.py", line 44, in <module>
    from requests_toolbelt.multipart import encoder
  File "/usr/local/lib/python3.10/dist-packages/requests_toolbelt/__init__.py", line 12, in <module>
    from .adapters import SSLAdapter, SourceAddressAdapter
  File "/usr/local/lib/python3.10/dist-packages/requests_toolbelt/adapters/__init__.py", line 12, in <module>
    from .ssl import SSLAdapter
  File "/usr/local/lib/python3.10/dist-packages/requests_toolbelt/adapters/ssl.py", line 16, in <module>
    from .._compat import poolmanager
  File "/usr/local/lib/python3.10/dist-packages/requests_toolbelt/_compat.py", line 50, in <module>
    from urllib3.contrib import appengine as gaecontrib
ImportError: cannot import name 'appengine' from 'urllib3.contrib' (/usr/local/lib/python3.10/dist-packages/urllib3/contrib/__init__.py)
As the ImportError in the traceback shows, the problem was not the SDK itself but its dependency chain: urllib3 2.0 removed its appengine contrib module, while the version of requests-toolbelt pulled in by the dnacentersdk still tried to import it at load time. Pinning older versions of urllib3 and requests-toolbelt resolved the conflict.
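To make the conflict concrete, here is a small Python sketch of the compatibility rule we ended up relying on. The thresholds reflect urllib3 2.0 removing its appengine contrib module and requests-toolbelt 1.0 dropping that import; the helper function is mine, for illustration only, not part of any library:

```python
# Sketch: why the import chain breaks.
# urllib3 2.0 removed urllib3.contrib.appengine; requests-toolbelt
# releases before 1.0 still import it at load time, so a version pair
# only works when at least one side is on the "new" side of the split.

def toolbelt_pair_ok(urllib3_version: str, toolbelt_version: str) -> bool:
    """Rough check: True if the two versions can coexist.
    Assumption: requests-toolbelt >= 1.0 no longer imports appengine."""
    urllib3_major = int(urllib3_version.split(".")[0])
    toolbelt_major = int(toolbelt_version.split(".")[0])
    return urllib3_major < 2 or toolbelt_major >= 1

# The pinned combination from the fixed Dockerfile:
print(toolbelt_pair_ok("1.26.15", "0.10.1"))  # True
# The combination that produced the traceback above:
print(toolbelt_pair_ok("2.0.2", "0.10.1"))    # False
```

In other words, either stay below urllib3 2.0 (as we did) or move requests-toolbelt up to a release that no longer touches the removed module.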
Applying the fix and reevaluating the Dockerfile
With the root of the issue pinpointed, the path towards resolution was clear. The following Dockerfile exemplifies my solution, pinning older versions of urllib3 and requests-toolbelt that are compatible with each other and with the dnacentersdk. Furthermore, I made sure to specify the precise versions of the dnacentersdk and the cisco.dnac Galaxy collection, as these also need to be in harmony for successful execution.
Please follow the link below to see a table of compatible versions for the DNA Center Ansible playbooks with the dnacentersdk:
FROM ubuntu:22.04

RUN apt-get update && \
    apt-get install -y apt-utils && \
    apt-get install -y gcc git && \
    apt-get install -y python3-venv python3-pip ssh

RUN python3 -m venv /root/ansible

RUN . /root/ansible/bin/activate && \
    pip install --upgrade pip && \
    pip install requests-toolbelt==0.10.1 && \
    pip install urllib3==1.26.15 && \
    pip install ansible && \
    pip install dnacentersdk==2.5.5 && \
    pip install jmespath && \
    pip install pyats[full] && \
    pip install ansible-lint && \
    ansible-galaxy collection install cisco.dnac:6.6.4
Reflecting on this journey, three fundamental principles stand out as critical lessons learned.
First, the importance of specifying exact versions when setting up a Docker container cannot be overstated. Docker containers are designed to provide a consistent environment, which is crucial for ensuring that applications run reliably when moved from one computing environment to another. However, the reproducibility and consistency of Docker containers can be compromised when we fail to specify exact versions of the software being installed. As our experience with the
urllib3 versions demonstrated, even minor differences in versions can lead to unforeseen conflicts and compatibility issues.
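One way to bake this lesson in is to keep the pins in a requirements file rather than scattered through the Dockerfile. A sketch, using the versions from the fixed image (the filename and layout are illustrative):

```
# requirements.txt -- pinned versions from the fixed Dockerfile
requests-toolbelt==0.10.1
urllib3==1.26.15
dnacentersdk==2.5.5
ansible
jmespath
pyats[full]
ansible-lint
```

The install step in the Dockerfile then collapses to pip install -r requirements.txt, and the pins live in one reviewable, diffable place.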
Secondly, I learned the value of using multiple RUN commands in Dockerfiles, an approach I also introduced in the new version of the Dockerfile.
This approach provides several benefits: it improves readability, makes Dockerfiles easier to understand, and facilitates layering, which can optimize build times and efficient use of storage. Furthermore, structuring Dockerfiles with multiple RUN commands can help in debugging, as it allows for a more granular understanding of where problems might arise, as we experienced first-hand in our troubleshooting journey.
Lastly, this experience emphasized the benefits of working with virtual environments. Virtual environments provide an isolated context, in which you can install specific versions of packages, without them interfering with other projects or system-wide installations. They offer an additional layer of safety, preventing package conflicts and ensuring a clean workspace for each of your projects.
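Outside of Docker, the same isolation is available locally; a minimal sketch (the paths are illustrative, and the pins are the ones from the fixed Dockerfile):

```shell
# Create an isolated environment for the lab tooling
python3 -m venv ~/labs/ansible-venv

# Activate it; pip now installs into the venv, not system-wide
. ~/labs/ansible-venv/bin/activate

# Pins installed here cannot conflict with other projects
pip install --upgrade pip
pip install "urllib3==1.26.15" "requests-toolbelt==0.10.1" "dnacentersdk==2.5.5"
```

This mirrors what the fixed Dockerfile does with /root/ansible: even inside a container, the venv keeps the lab's pinned packages away from the distribution's own Python packages.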
In conclusion, Docker provides a powerful platform for consistent, reproducible application deployment, but as with any powerful tool, it requires careful handling. Attention to version specificity, the strategic use of multiple RUN commands in Dockerfiles, and the utilization of virtual environments are practices that contribute significantly to ensuring that Docker delivers on its promise of “Build, Ship, and Run Any App, Anywhere.”