Scetron docs odds and ends #266

Merged: 4 commits, Nov 27, 2024
4 changes: 4 additions & 0 deletions changes/266.housekeeping
@@ -0,0 +1,4 @@
Fixed typos in function names, comments, and errors.
Updated documentation to include an example CSV for bulk onboarding.
Updated documentation to clarify YAML override placement and Git repository setup.
Added a homepage to the app config.
2 changes: 1 addition & 1 deletion docs/dev/arch_decision.md
@@ -4,6 +4,6 @@ The intention is to document deviations from a standard Model View Controller (M

## Handling the Nornir Inventory

In order for Nornir to function an inventory is created. There are multiple supported inventory sources that fit many needs; however there is a unique requirement that this plugin is trying to solve. The problem is specifically around the first SSoT job (Sync Devices from Network); how can we create an inventory when there is no source "yet"? Our solution to this problem is to generate a empty inventory, and then process the ip addresses from the job form to create a inventory in an on demand fashion and inject the credentials into the inventory based on the secrets group selected.
In order for Nornir to function, an inventory is created. There are multiple supported inventory sources that fit many needs; however, there is a unique requirement that this plugin is trying to solve. The problem is specifically around the first SSoT job (Sync Devices from Network): how can we create an inventory when there is no source "yet"? Our solution is to generate an empty inventory, then process the IP addresses from the job form to create an inventory on demand, injecting the credentials into the inventory based on the selected secrets group.

For the general application constraint for this ADR see the [Credentials Section](../user/app_getting_started.md#device-credentials-functionality).
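
As a minimal sketch of that on-demand pattern (not the app's actual code; the host fields and credential handling here are simplified assumptions), an inventory can be assembled from submitted IPs like so:

```python
# Hedged sketch: build a Nornir inventory on demand from submitted IPs.
# Illustrative only; the app's real logic also resolves platforms, ports,
# and secrets groups.
from nornir.core.inventory import Defaults, Groups, Host, Hosts, Inventory


def build_on_demand_inventory(ip_addresses, username, password, port=22):
    """Start from an empty inventory and inject one host per IP address."""
    hosts = Hosts()
    for ip in ip_addresses:
        hosts[ip] = Host(
            name=ip,
            hostname=ip,
            port=port,
            username=username,  # injected from the selected secrets group
            password=password,
        )
    return Inventory(hosts=hosts, groups=Groups(), defaults=Defaults())
```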
2 changes: 1 addition & 1 deletion docs/dev/extending.md
@@ -13,7 +13,7 @@ Please see the dedicated FAQ for [device onboarding extensions](onboarding_exten

## Extending SSoT jobs (Sync Devices From Network, and Sync Network Data From Network)

Extending the platform support for the SSoT specific jobs should be accomplished with adding a yaml file that defines commands, jdiff jmespaths, and post_processors. A PR into this library is welcomed, but this app exposes the Nautobot core datasource capabilities to be able to load in overrides from a Git repository.
Extending the platform support for the SSoT-specific jobs should be accomplished by adding a YAML file that defines commands, jdiff, jmespaths, and post_processors. A PR into this library is welcomed, but this app also exposes the Nautobot core datasource capabilities to load in overrides from a Git repository.

### Adding Platform/OS Support

16 changes: 10 additions & 6 deletions docs/user/app_use_cases.md
@@ -7,7 +7,7 @@ This document describes common use-cases and scenarios for this App utilizing th
This App can be used in three general ways.

1. Onboard a device with basic information. (Name, Serial, Device Type, Management IP + Interface)
2. Take existing devices and enhace the data for each device by syncing in more metadata. (Interface, VLANs, VRFs, Cabling, etc.)
2. Take existing devices and enhance the data for each device by syncing in more metadata. (Interface, VLANs, VRFs, Cabling, etc.)
3. Both 1 and 2 in conjunction with each other.

### Preparation
@@ -76,6 +76,12 @@ During a successful onboarding process, a new device will be created in Nautobot

This SSoT job supports a bulk CSV execution option to speed up this process.

### Example CSV
```csv
ip_address_host,location_name,device_role_name,namespace,device_status_name,interface_status_name,ip_address_status_name,secrets_group_name,platform_name,set_mgmt_only,update_devices_without_primary_ip
192.168.1.1,"Test Site",Onboarding,Global,Active,Active,Active,"test secret group",,False,True
```
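
For illustration, a small sketch (hypothetical, not the job's actual parser; the filename is a placeholder) of reading rows like the one above with the standard library. Note the boolean columns arrive as strings:

```python
# Hedged sketch: parse a bulk-onboarding CSV row and normalize booleans.
# Column names mirror the example above; error handling is simplified.
import csv


def to_bool(value: str) -> bool:
    """Convert a 'true'/'false' string (any casing) to a bool."""
    lowered = value.strip().lower()
    if lowered == "true":
        return True
    if lowered == "false":
        return False
    raise ValueError(f"Expected 'true' or 'false', got {value!r}")


with open("bulk_onboarding.csv", newline="", encoding="utf-8") as handle:
    for row in csv.DictReader(handle):
        print(
            row["ip_address_host"],
            row["location_name"],
            to_bool(row["set_mgmt_only"]),
            to_bool(row["update_devices_without_primary_ip"]),
        )
```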

### Consult the Status of the Sync Network Devices SSoT Job

The status of onboarding jobs can be viewed via the UI (Jobs > Job Results) or retrieved via API (`/api/extras/job-results/`) with each process corresponding to an individual Job-Result object.
@@ -84,8 +84,8 @@ The status of onboarding jobs can be viewed via the UI (Jobs > Job Results) or r

To run the SSoT Sync Devices Job via the API:


Post to `/api/extras/jobs/SSOTSyncDevices/run/` with the relevent onboarding data:
Post to `/api/extras/jobs/SSOTSyncDevices/run/` with the relevant onboarding data:

```bash
curl -X "POST" <nautobot URL>/api/extras/jobs/SSOTSyncDevices/run/ -H "Content-Type: application/json" -H "Authorization: Token $NAUTOBOT_TOKEN" -d '{"data": {"location": "<valid location UUID>", "ip_address": "<reachable IP to onboard>", "port": 22, "timeout": 30}}'
```
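
The same call can be scripted; here is a hedged Python sketch using `requests` (the URL, token, and UUID values are placeholders you must supply):

```python
# Hedged sketch: submit the Sync Devices job via the REST API with requests.
# NAUTOBOT_URL and NAUTOBOT_TOKEN are placeholders, not real values.
import requests

NAUTOBOT_URL = "https://nautobot.example.com"
NAUTOBOT_TOKEN = "0123456789abcdef0123456789abcdef01234567"

response = requests.post(
    f"{NAUTOBOT_URL}/api/extras/jobs/SSOTSyncDevices/run/",
    headers={"Authorization": f"Token {NAUTOBOT_TOKEN}"},
    json={
        "data": {
            "location": "<valid location UUID>",
            "ip_address": "<reachable IP to onboard>",
            "port": 22,
            "timeout": 30,
        }
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())  # contains the job result for status tracking
```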
@@ -108,7 +113,7 @@ Optional Fields:

### Enhance Existing Device

A existing devices data can be expanded to include additional objects by:
An existing device's data can be expanded to include additional objects by:

- A SSoT job execution.
- Via Jobs menu
@@ -128,8 +133,7 @@ The status of onboarding jobs can be viewed via the UI (Jobs > Job Results) or r

To run the SSoT Sync Network Data Job via the API:


Post to `/api/extras/jobs/SSOTSyncNetworkData/run/` with the relevent onboarding data:
Post to `/api/extras/jobs/SSOTSyncNetworkData/run/` with the relevant onboarding data:

```bash
curl -X "POST" <nautobot URL>/api/extras/jobs/SSOTSyncNetworkData/run/ -H "Content-Type: application/json" -H "Authorization: Token $NAUTOBOT_TOKEN" -d '{"data": {"devices": "<valid devices UUID>"}}'
```
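
To watch either job finish, you can poll the Job Results endpoint mentioned above. A hedged sketch follows; the exact shape of the status field may vary by Nautobot version, so treat it as an assumption to verify against your API schema:

```python
# Hedged sketch: poll /api/extras/job-results/<uuid>/ until the job settles.
# The status payload shape is an assumption; check your Nautobot version.
import time

import requests

NAUTOBOT_URL = "https://nautobot.example.com"  # placeholder
HEADERS = {"Authorization": "Token <your API token>"}


def wait_for_job_result(job_result_id, interval=10):
    """Return the job-result payload once it leaves a pending/running state."""
    url = f"{NAUTOBOT_URL}/api/extras/job-results/{job_result_id}/"
    while True:
        payload = requests.get(url, headers=HEADERS, timeout=30).json()
        status = payload.get("status", {}).get("value", "")
        if status not in ("PENDING", "STARTED", "RUNNING"):
            return payload
        time.sleep(interval)
```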
69 changes: 41 additions & 28 deletions docs/user/app_yaml_overrides.md
@@ -1,6 +1,46 @@
# Extending and Overriding Platform YAML Files

One element of the new SSoT based jobs this app exposes; is the attempt to create a framework that allows the definition of each platforms dependencies in a YAML format.
One element of the new SSoT-based jobs this app exposes is a framework that allows each platform's dependencies to be defined in a YAML format.

This App provides sane defaults that have been tested; the command mapper files are located in the source code under `command_mappers`. These defaults may not work in a given environment, or you may want to add additional platform support in your deployment. These are the two main use cases for the datasource feature this app exposes.

!!! info
To avoid overly complicating the merge logic, the App will always prefer the platform-specific YAML file loaded in from the Git repository.

!!! warn
Partial YAML file merging is not supported, meaning you cannot override only the `sync_devices` definition and inherit the `sync_network_data` definition.
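
The override behavior amounts to a whole-file lookup in which the Git copy wins. Below is a minimal sketch of that precedence, assuming hypothetical paths and a hypothetical helper name (this is not the app's actual loader):

```python
# Hedged sketch of the "Git override wins, whole file only" precedence.
# Both directory paths and the helper name are hypothetical.
from pathlib import Path

import yaml  # PyYAML

DEFAULTS_DIR = Path("nautobot_device_onboarding/command_mappers")
GIT_OVERRIDE_DIR = Path("/opt/nautobot/git/onboarding_command_mappers")  # hypothetical clone path


def load_command_mapper(network_driver: str) -> dict:
    """Load the full mapper for a driver; a Git file replaces the default entirely."""
    override = GIT_OVERRIDE_DIR / f"{network_driver}.yml"
    default = DEFAULTS_DIR / f"{network_driver}.yml"
    chosen = override if override.exists() else default  # no partial merging
    return yaml.safe_load(chosen.read_text())
```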


## File Name
The YAML files must be named `<network_driver>.yml`, where `network_driver` must exist in the netutils mapping exposed from Nautobot core.

## File Placement
The override files can either be placed directly in the Python package's command mappers directory (by default: `/opt/nautobot/lib64/python<python version>/site-packages/nautobot_device_onboarding/command_mappers/`) or loaded via a Git Data Source.

### Git Data Source

File structure:
```bash
.
├── README.md
└── onboarding_command_mappers
└── <network_driver>.yml
```

When loading from a Git repository, this App expects a root directory called `onboarding_command_mappers`. Each of the platform YAML files is located in this directory. The YAML files must be named `<network_driver>.yml`, where `network_driver` must exist in the netutils mapping exposed from Nautobot core. If your platform does not appear in the netutils mapping, you can override or add your platform via the admin > config panel.

To quickly get a list of network driver mappings in core, run:

```python
from nautobot.dcim.utils import get_all_network_driver_mappings

sorted(list(get_all_network_driver_mappings().keys()))
```
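
On a stock install this returns driver keys such as `arista_eos` and `cisco_ios`; the exact list depends on your Nautobot and netutils versions, so verify against your own environment.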

### Setting up the Git Repository

1. Extensibility -> Git Repositories
2. Create a new repository, most importantly selecting the `Provides` of `Network Sync Job Command Mappers`

## File Format
There are only a few components to the file and they're described below:
@@ -41,30 +81,3 @@ sync_devices:
post_processor: "{{ obj[0] | upper }}"
..omitted..
```

## Using Datasource to Override

This App provides sane defaults that have been tested, the files are located in the source code under `command_mappers`. There is potential for these sane defaults to not work in a given environment; alternatively you may want to add additional platform support in your deployment. These are the two main use cases to utilize the datasource feature this app exposes.

!!! info
To avoid overly complicating the merge logic, the App will always prefer the platform specific YAML file loaded in from the git repository.

!!! warn
Partial YAML file merging is not supported. Meaning you can't only overload `sync_devices` definition and inherit `sync_network_data` definition.

### Properly Formatting Git Repository

When loading from a Git Repository this App is expecting a root directory called `onboarding_command_mappers`. Each of the platform YAML files are then located in this directory. The YAML file names must be named `<network_driver>.yml`. Where network_driver must exist in the netutils mapping exposed from Nautobot core.

To quickly get a list run:

```python
from nautobot.dcim.utils import get_all_network_driver_mappings

sorted(list(get_all_network_driver_mappings().keys()))
```

### Setting up the Git Repository

1. Extensibility -> Git Repositories
2. Create a new repository, most importantly selecting the `Provides` of `Network Sync Job Command Mappers`
1 change: 1 addition & 0 deletions nautobot_device_onboarding/__init__.py
@@ -45,6 +45,7 @@ class NautobotDeviceOnboardingConfig(NautobotAppConfig):
}
caching_config = {}
docs_view_name = "plugins:nautobot_device_onboarding:docs"
home_view_name = "extras:job_list" # Jobs only for now. May change in the future.


config = NautobotDeviceOnboardingConfig # pylint:disable=invalid-name
@@ -11,7 +11,9 @@
from nautobot.dcim.models import Device, DeviceType, Manufacturer, Platform

from nautobot_device_onboarding.diffsync.models import sync_devices_models
from nautobot_device_onboarding.nornir_plays.command_getter import sync_devices_command_getter
from nautobot_device_onboarding.nornir_plays.command_getter import (
sync_devices_command_getter,
)
from nautobot_device_onboarding.utils import diffsync_utils

ParameterSet = FrozenSet[Tuple[str, Hashable]]
@@ -76,8 +78,8 @@ def load_platforms(self):
adapter=self,
pk=platform.pk,
name=platform.name,
network_driver=platform.network_driver if platform.network_driver else "",
manufacturer__name=platform.manufacturer.name if platform.manufacturer else None,
network_driver=(platform.network_driver if platform.network_driver else ""),
manufacturer__name=(platform.manufacturer.name if platform.manufacturer else None),
)
self.add(onboarding_platform)
if self.job.debug:
@@ -125,12 +127,12 @@ def load_devices(self):
name=device.name,
platform__name=device.platform.name if device.platform else "",
primary_ip4__host=device.primary_ip4.host if device.primary_ip4 else "",
primary_ip4__status__name=device.primary_ip4.status.name if device.primary_ip4 else "",
primary_ip4__status__name=(device.primary_ip4.status.name if device.primary_ip4 else ""),
role__name=device.role.name,
status__name=device.status.name,
secrets_group__name=device.secrets_group.name if device.secrets_group else "",
secrets_group__name=(device.secrets_group.name if device.secrets_group else ""),
interfaces=interfaces,
mask_length=device.primary_ip4.mask_length if device.primary_ip4 else None,
mask_length=(device.primary_ip4.mask_length if device.primary_ip4 else None),
serial=device.serial,
)
self.add(onboarding_device)
@@ -206,21 +208,23 @@ def execute_command_getter(self):
raise Exception("Platform.network_driver missing") # pylint: disable=broad-exception-raised

result = sync_devices_command_getter(
self.job.job_result, self.job.logger.getEffectiveLevel(), self.job.job_result.task_kwargs
self.job.job_result,
self.job.logger.getEffectiveLevel(),
self.job.job_result.task_kwargs,
)
if self.job.debug:
self.job.logger.debug(f"Command Getter Result: {result}")
data_type_check = diffsync_utils.check_data_type(result)
if self.job.debug:
self.job.logger.debug(f"CommandGetter data type check resut: {data_type_check}")
self.job.logger.debug(f"CommandGetter data type check result: {data_type_check}")
if data_type_check:
self._handle_failed_devices(device_data=result)
else:
self.job.logger.error(
"Data returned from CommandGetter is not the correct type. "
"No devices will be onboarded, check the CommandGetter job logs."
)
raise ValidationError("Unexpected data returend from CommandGetter.")
raise ValidationError("Unexpected data returned from CommandGetter.")

def _add_ip_address_to_failed_list(self, ip_address):
"""If an a model fails to load, add the ip address to the failed list for logging."""
@@ -297,7 +301,13 @@ def load_device_types(self):
def _fields_missing_data(self, device_data, ip_address, platform):
"""Verify that all of the fields returned from a device actually contain data."""
fields_missing_data = []
required_fields_from_device = ["device_type", "hostname", "mgmt_interface", "mask_length", "serial"]
required_fields_from_device = [
"device_type",
"hostname",
"mgmt_interface",
"mask_length",
"serial",
]
if platform: # platform is only returned with device data if not provided on the job form/csv
required_fields_from_device.append("platform")
for field in required_fields_from_device:
@@ -311,7 +321,7 @@ def load_devices(self):
for ip_address in self.device_data:
if self.job.debug:
self.job.logger.debug(f"loading device data for {ip_address}")
platform = None # If an excption is caught below, the platform must still be set.
platform = None # If an exception is caught below, the platform must still be set.
onboarding_device = None
try:
location = diffsync_utils.retrieve_submitted_value(
@@ -321,7 +331,9 @@
job=self.job, ip_address=ip_address, query_string="platform"
)
primary_ip4__status = diffsync_utils.retrieve_submitted_value(
job=self.job, ip_address=ip_address, query_string="ip_address_status"
job=self.job,
ip_address=ip_address,
query_string="ip_address_status",
)
device_role = diffsync_utils.retrieve_submitted_value(
job=self.job, ip_address=ip_address, query_string="device_role"
@@ -338,7 +350,7 @@
device_type__model=self.device_data[ip_address]["device_type"],
location__name=location.name,
name=self.device_data[ip_address]["hostname"],
platform__name=platform.name if platform else self.device_data[ip_address]["platform"],
platform__name=(platform.name if platform else self.device_data[ip_address]["platform"]),
primary_ip4__host=ip_address,
primary_ip4__status__name=primary_ip4__status.name,
role__name=device_role.name,
@@ -365,7 +377,7 @@
if fields_missing_data:
onboarding_device = None
self.job.logger.error(
f"Unable to onbaord {ip_address}, returned data missing for {fields_missing_data}"
f"Unable to onboard {ip_address}, returned data missing for {fields_missing_data}"
)
else:
if onboarding_device:
12 changes: 6 additions & 6 deletions nautobot_device_onboarding/jobs.py
@@ -121,7 +121,7 @@ def run(self, *args, **data):
self.credentials = data["credentials"]

self.logger.info("START: onboarding devices")
# allows for itteration without having to spawn multiple jobs
# allows for iteration without having to spawn multiple jobs
# Later refactor to use nautobot-plugin-nornir
for address in data["ip_address"].replace(" ", "").split(","):
try:
@@ -132,7 +132,7 @@
)
if not data["continue_on_failure"]:
raise OnboardException(
"fail-general - An exception occured and continue on failure was disabled."
"fail-general - An exception occurred and continue on failure was disabled."
) from err

def _onboard(self, address):
@@ -306,7 +306,7 @@ def load_target_adapter(self):
self.target_adapter = SyncDevicesNautobotAdapter(job=self, sync=self.sync)
self.target_adapter.load()

def _convert_sring_to_bool(self, string, header):
def _convert_string_to_bool(self, string, header):
"""Given a string of 'true' or 'false' convert to bool."""
if string.lower() == "true":
return True
@@ -369,10 +369,10 @@ def _process_csv_data(self, csv_file):
name=row["platform_name"].strip(),
)

set_mgmgt_only = self._convert_sring_to_bool(
set_mgmt_only = self._convert_string_to_bool(
string=row["set_mgmt_only"].lower().strip(), header="set_mgmt_only"
)
update_devices_without_primary_ip = self._convert_sring_to_bool(
update_devices_without_primary_ip = self._convert_string_to_bool(
string=row["update_devices_without_primary_ip"].lower().strip(),
header="update_devices_without_primary_ip",
)
@@ -382,7 +382,7 @@
processed_csv_data[row["ip_address_host"]]["namespace"] = namespace
processed_csv_data[row["ip_address_host"]]["port"] = int(row["port"].strip())
processed_csv_data[row["ip_address_host"]]["timeout"] = int(row["timeout"].strip())
processed_csv_data[row["ip_address_host"]]["set_mgmt_only"] = set_mgmgt_only
processed_csv_data[row["ip_address_host"]]["set_mgmt_only"] = set_mgmt_only
processed_csv_data[row["ip_address_host"]]["update_devices_without_primary_ip"] = (
update_devices_without_primary_ip
)