Unable to log in as `ubuntu` user on EC2 instance spawned from auto scaling group



  • Using the Ansible AWS modules, I'm creating an AMI from an existing EC2 instance that I can SSH into with both my own user and the default account (`ubuntu`). Once the AMI is in a ready state, I create a launch template with the new AMI and an autoscaling group that uses that launch template. When the instance from the autoscaling group comes up, I can only SSH in with my own user account, not the default `ubuntu` account. The `key_name` used for the first instance and for the launch template is identical, the /etc/ssh/sshd_config file is identical between the first instance and the autoscaled instance, and both instances use the same security groups with port 22 open for SSH. I assume some data might be lost during the AMI creation step, but I'm not sure (one way to compare the account state on the two instances is sketched after the playbook below). Any and all help would be appreciated, and I'd be happy to provide more information if needed. Thank you!

    - name: Create a sandbox instance
      hosts: localhost
      become: False
      gather_facts: False
      tasks:
        - name: Launch instance
          ec2_instance:
            key_name: "{{ keypair }}"
            security_group: "{{ security_group }}"
            instance_type: "{{ instance_type }}"
            image_id: "{{ image }}"
            wait: true
            region: "{{ region }}"
            vpc_subnet_id: "{{ vpc_subnet_id }}"
            volumes:
              - device_name: /dev/sda1
                ebs:
                  volume_size: 50
                  delete_on_termination: true
            network:
              assign_public_ip: true
            tags:
              tmp: instance
          register: ec2
    
        - name: Debug EC2 variable availability
          ansible.builtin.debug:
            msg: ec2 instance {{ ec2.instances[0].network_interfaces[0].private_ip_address }}

        - name: Add new instance to host group
          add_host:
            hostname: "{{ ec2.instances[0].network_interfaces[0].private_ip_address }}"
            groupname: launched

        - name: Wait for SSH to come up
          delegate_to: "{{ ec2.instances[0].network_interfaces[0].private_ip_address }}"
          remote_user: "{{ bootstrap_user }}"
          wait_for_connection:
            delay: 60
            timeout: 320
    
    - name: Configure instance
      hosts: launched
      become: True
      gather_facts: True
      remote_user: "{{ bootstrap_user }}"
      roles:
        - app_server
        - ruby
        - nginx
        - role: Datadog.datadog
          vars:
            datadog_checks:
              sidekiq:
                logs:
                  - type: file
                    path: /var/log/sidekiq.log
                    source: sidekiq
                    service: sidekiq
                    tags:
                      - "env:{{rails_env}}"
    - hosts: launched
      become: yes
      gather_facts: no
      remote_user: "{{ user_name }}"
      become_user: "{{ user_name }}"

      # Need to set this hostname appropriately
      pre_tasks:
        - name: set hostname
          set_fact: hostname="sidekiq"
      roles:
        - deploy_app
    - hosts: launched
      become: yes
      gather_facts: no
      remote_user: "{{ bootstrap_user }}"
      roles:
        - sidekiq
    - name: Generate AMI from newly generated EC2 instance
      hosts: localhost
      gather_facts: False
      pre_tasks:
        - set_fact: ami_date="{{lookup('pipe','date +%Y%m%d%H%M%S')}}"
      tasks:
        - name: Debug EC2 instance variable availability
          ansible.builtin.debug:
            msg: EC2 Instances {{ ec2.instances }}

        - name: Create AMI
          ec2_ami:
            instance_id: "{{ ec2.instances[0].instance_id }}"
            name: "sidekiq_ami_{{ ami_date }}"
            device_mapping:
              - device_name: /dev/sda1
                size: 200
                delete_on_termination: true
                volume_type: gp3
            wait: True
            tags:
              env: "{{ rails_env }}"
          register: ami

        - name: Terminate instances that were previously launched
          ec2:
            state: "absent"
            instance_ids: "{{ ec2.instances[0].instance_id }}"
            region:

        - name: Debug AMI variable availability
          ansible.builtin.debug:
            msg: AMI {{ ami }}

        - name: Create an ec2 launch template from new Sidekiq AMI
          ec2_launch_template:
            template_name: "sidekiq_launch_template_{{ rails_env }}"
            image_id: "{{ ami.image_id }}"
            key_name: "{{ keypair }}"
            instance_type: "{{ instance_type }}"
            disable_api_termination: true
            block_device_mappings:
              - device_name: /dev/sda1
                ebs:
                  volume_size: 200
                  volume_type: gp3
                  delete_on_termination: true
            network_interfaces:
              - device_index: 0
                associate_public_ip_address: yes
                subnet_id: "{{ subnet_id }}"
                groups: ["{{ security_group }}"]
            user_data: "{{ '#!/bin/bash\nsudo systemctl restart sidekiq.service' | b64encode }}"
          register: template

        # Rolling ASG update with new launch template
        - name: Rolling update of the existing EC2 instances
          ec2_asg:
            name: "sidekiq_autoscaling_group_{{ rails_env }}"
            availability_zones:
              - us-west-1a
            launch_template:
              launch_template_name: "sidekiq_launch_template_{{ rails_env }}"
            health_check_period: 60
            health_check_type: EC2
            replace_all_instances: yes
            min_size: "{{ min_size }}"
            max_size: "{{ max_size }}"
            desired_capacity: "{{ desired_capacity }}"
            region: "{{ region }}"
            tags:
              - env: "{{ rails_env }}"
                Name: "{{ rails_env }}-sidekiq"
            vpc_zone_identifier: ["{{ subnet_id }}"]
          register: asg
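
    To narrow down whether account data changes during the AMI bake, one thing that can be compared between the source instance and an autoscaled instance is the `ubuntu` entry in /etc/shadow. A minimal check play is sketched below; the `asg_instance` host alias is hypothetical and assumes the autoscaled instance has already been added to the inventory and is reachable as the working user.

    - name: Check whether the ubuntu account is locked
      hosts: asg_instance
      become: yes
      gather_facts: no
      remote_user: "{{ user_name }}"
      tasks:
        - name: Read the ubuntu entry from /etc/shadow
          ansible.builtin.command: grep '^ubuntu:' /etc/shadow
          register: ubuntu_shadow
          changed_when: false

        - name: Show the entry (a leading "!" in the second field means the account is locked)
          ansible.builtin.debug:
            msg: "{{ ubuntu_shadow.stdout }}"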


  • Apparently, during the AMI creation process the default user's password gets locked: a `!` is prepended to the password hash in /etc/shadow.

    I was able to just add `passwd -u ubuntu` to the user data of the launch template to unlock it, and we are good to go.
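
    For reference, this is roughly how the unlock fits into the launch template's user data from the playbook above (a sketch only; the other parameters of the task stay as they were). EC2 user data runs as root at boot, so `passwd -u ubuntu` can run right before the service restart:

    - name: Create an ec2 launch template from new Sidekiq AMI
      ec2_launch_template:
        template_name: "sidekiq_launch_template_{{ rails_env }}"
        image_id: "{{ ami.image_id }}"
        key_name: "{{ keypair }}"
        instance_type: "{{ instance_type }}"
        # Unlock the default account, then restart the service
        user_data: "{{ '#!/bin/bash\npasswd -u ubuntu\nsystemctl restart sidekiq.service' | b64encode }}"
      register: template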


