
Bit of an odd one, so bear with me.

I am building an Azure VM image using Packer. Part of the install requires some Python libraries. I can install them via a shell script in the Packer run list; however, when the image is sealed and ready to be used, those libraries are missing.

I have found that the libraries are missing because they're installed to a user profile, and part of the Packer process is to delete that user profile when 'resealing' the image for use.

I have Googled a number of ways, but I haven't found anything that lets me run a script on login for a user that doesn't exist yet; the account is only created when we first use the image and log in.

Does anyone know how I could run this script at first logon?

#!/bin/bash

cd /


echo "Checking that the interpreter and package manager are available"
for cmd in python pip; do
    if ! command -v "$cmd" >/dev/null; then
        echo "$cmd was not installed or not found on PATH"
        exit 1
    fi
done


list1=(
    pandevice
    pan-python
    requests
    requests_toolbelt
    "requests[security]"    # quoted so the shell doesn't try to glob the brackets
)

for i in "${list1[@]}"; do
    pip install "$i" || exit 1
done


list2=(
    asn1crypto
    certifi
    cffi
    chardet
    cryptography
    enum34
    idna
    ipaddress
    pan-python
    pandevice
    pycparser
    pyOpenSSL
    requests
    requests-toolbelt
    six
    urllib3
)

for x in "${list2[@]}"; do
    # Anchor the match so e.g. "requests" doesn't also match requests-toolbelt
    if ! pip freeze | grep -qi "^$x=="; then
        echo "$x was not installed"
        exit 1
    fi
done
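For context on why the installs vanish: per-user pip installs go to the user site directory, which lives inside $HOME, so wiping the build user's profile removes everything in it. A quick way to see where that directory is (a sketch; python3 is assumed to be on PATH):

```shell
# Per-user pip installs land in the user site directory, which lives inside
# $HOME, so deleting the build user's profile deletes the libraries too.
python3 -m site --user-site
```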

Just so you know, I have thought about using sudo pip install; however, I'm wary of the risks: What are the risks of running 'sudo pip'?
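For reference, the kind of first-login hook I'm imagining is a script dropped into /etc/profile.d during the build, since those scripts run for every user at login, including accounts created after the image is sealed. A rough sketch (the file name and marker path are my own invention, and it writes to a temp dir here purely for illustration):

```shell
# Sketch of a first-login hook. In the real image this would be written to
# /etc/profile.d/first-login-pip.sh; a temp dir is used here for illustration.
dest="$(mktemp -d)/first-login-pip.sh"
cat > "$dest" <<'EOF'
# Runs at every login; the marker file makes the install a one-time action.
marker="$HOME/.pan_libs_installed"
if [ ! -f "$marker" ]; then
    pip install --user pandevice pan-python && touch "$marker"
fi
EOF
chmod 644 "$dest"
```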

  • Coming back to this due to the recent reply: pip libraries are installed per user, and since Packer blasts the user's local profile, they go too. So they have to be installed after the VM is built from the image. – Beefcake Mar 19 '20 at 11:21

1 Answer


I am curious what your Packer file looks like. I have run scripts as root as follows.

packer file example:

{
    "variables": {
        "image_class": "centos7-base",
        "build_number": ""
    },
    "builders": [
        {
            "type": "googlecompute",
            "source_image_family": "{{ user `google_base_image_family` }}",
            "account_file": "{{ user `gce_service_account` }}",
            "image_family": "{{ user `_image_family` }}-{{ user `image_class` }}",
            "image_name": "wp-{{ user `image_class` }}-b{{ user `build_number` }}-{{ timestamp }}",
            "project_id": "{{ user `gce_project_id` }}",
            "ssh_username": "{{ user `ssh_username` }}",
            "subnetwork": "{{ user `gce_subnetwork` }}",
            "network": "{{ user `gce_network` }}",
            "zone": "{{ user `gce_zone` }}",
            "omit_external_ip": "true",
            "use_internal_ip": "true",
            "disk_size": 20

        }
    ],
    "provisioners": [
        {
            "type": "shell",
            "only": ["googlecompute"],
            "script": "base-image.sh",
            "skip_clean": true,
            "execute_command": "sudo chmod +x {{ .Path }}; sudo {{ .Vars }} {{ .Path }}"
        }
    ],
    "post-processors": [
    ]
}

This file is used for Google Cloud. The important section is the shell provisioner:

"provisioners": [
        {
            "type": "shell",
            "only": ["googlecompute"],
            "script": "{{ user `provisioner_root` }}/shell/base-image.sh",
            "skip_clean": true,
            "execute_command": "sudo chmod +x {{ .Path }}; sudo {{ .Vars }} {{ .Path }}"
        }
    ]
The install that happens in base-image.sh is persistent. In general, I don't think it is a good idea to do installs on running servers; that goes against the idea of immutable servers.
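Since the provisioner runs base-image.sh as root, the pip installs should land in the system site-packages and survive the user-profile cleanup. A small check along these lines could go at the end of base-image.sh to confirm that (a sketch; the stdlib json module stands in for the real packages here so it runs anywhere):

```shell
# Verify a module resolves from a system path rather than the build user's
# profile. Substitute pandevice/pan-python for json in the real script.
pkg=json
loc=$(python3 -c "import importlib.util as u; print(u.find_spec('$pkg').origin)")
case "$loc" in
    "$HOME"/.local/*) echo "WARNING: $pkg is installed per-user at $loc" ;;
    *) echo "$pkg resolves system-wide at $loc" ;;
esac
```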