After several strategies to find it, the location (in MacOS BigSur 11.4) is:
~/Library/Logs/Xray Exploratory App/main.log
Hope that helps.
The task seems simple: if the file does not exist, create it; otherwise, retain the file and its content. In both cases, apply permissions (owner, group, mode).
The most obvious candidate for this task is ansible.builtin.file with state: touch. However, it always reports changed, because the access and modification times get updated on every run. Instead, once the file exists and the modes are correct, it should simply report ok.
The next likely candidate is ansible.builtin.copy. When passing force: false, it creates the file, but it does not fix up permissions on an already existing file.
It is possible to run containers as a regular user using https://podman.io/ (buzzword: rootless, see https://github.com/containers/podman#rootless ).
Note that while Podman is based on the same OCI standards as Docker, there are some differences in the details. For example, health checks are specific to Docker.
I assume this is what Terragrunt would help to solve. It adds a new layer where you can easily manage complex infrastructure or keep your work DRY.
I had the same issue when trying Microsoft's tutorial. Adding host under the rules section worked for me.
Example:
spec:
  ingressClassName: nginx
  rules:
  - host: {your_app}.northeurope.cloudapp.azure.com
    http:
      paths:
      - path: /test(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: mockserver-service
            port:
              number: 1234
Got it.
Thanks to https://stackoverflow.com/a/72530077/20003774 , all I had to do was explicitly set an appropriate logging level.

import logging

# Use this logger to forward log messages to CloudWatch Logs.
LOG = logging.getLogger(__name__)
LOG.setLevel(logging.INFO)  # <- this line was missing

When deploying the stack, the log group appears. Yay.
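To see why that one line matters: a fresh logger has no level of its own and inherits the root default (WARNING), so INFO records are filtered out before they ever reach a handler (and hence CloudWatch). A minimal sketch (the logger name is an arbitrary example):

```python
import logging

log = logging.getLogger("my_lambda")  # arbitrary example name

# Without an explicit level, the logger falls back to the root default (WARNING),
# so INFO records are dropped before reaching any handler.
print(log.isEnabledFor(logging.INFO))  # False

log.setLevel(logging.INFO)  # <- the missing line
print(log.isEnabledFor(logging.INFO))  # True
```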
I'm taking TechnoWorld's Kubernetes Administration class, where we learn to create a Kubernetes cluster from scratch using one Ubuntu EC2 instance for the master node and two Ubuntu EC2 instances for the worker nodes. The course guides us through installing the different Kubernetes components (containerd, kubeadm, kubelet, kubectl, Weave Net, etc.) on each pertinent EC2 instance. We use NGINX as the simple application running in our worker node pods. I have the infrastructure set up to the point where I can, for example, run curl from my master node and get the "Welcome to nginx" greeting as a response. So far so good. I even created a Terraform script to set this up. I also set up a test Kubernetes LoadBalancer service that's accessible to the outside world, put its IP:port in the browser, and saw "Welcome to nginx", proving that the networking (VPC, subnet, security group) is correct.
Here's the traffic flow of the fundamental infrastructure we're trying to build.
Internet-facing AWS ALB(port 80) -> ingress controller LoadBalancer service(port 80:32111 mapping) -> ingress -> ClusterIP service -> EC2 worker with NGINX pod
So a user puts the ALB's DNS name in the browser and gets the "Welcome to nginx" greeting.
However, with my setup, when I enter the ALB's DNS name, I get a "This site can't be reached" message.
I set up my internet-facing ALB to serve port 80, forward to a target group on port 32111 (ingress controller port) that has the 2 EC2 worker nodes registered as targets.
Any pointers on how I can debug this seemingly simple/fundamental setup?
TIA
Maybe you can go into a bit more detail on the branching strategy you follow, for example Git Flow, as that will impact how you set up your CI/CD pipelines.
For us, we have one pipeline for CD that will build the development branch (Git Flow) and deploy to our UAT environment, but automatically skip the production environment stage if the branch is called Develop* or Feature*.
If the branch starts with Release/*, it will go through the build stage, the UAT deployment stage, the automated testing stage (against UAT), and then the production stage; see the graphic below for an illustration.
What I like about this approach is that you use the exact same code package (artifacts) on UAT and Prod; in your case that would be Dev, UAT, and Prod.
In our case, the CI pipeline is a separate pipeline that uses the same YAML templates for the build, but only deploys to the automated test environment and is triggered by a pull request policy.
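As a rough sketch of the branch gating described above (Azure Pipelines YAML is assumed from the mention of YAML templates; all stage, job, and branch names are illustrative):

```yaml
# Illustrative skeleton; the real stages would build once and reuse the artifact.
trigger:
  branches:
    include: [ develop, 'feature/*', 'release/*' ]

stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo "build once, publish artifacts"
- stage: UAT
  dependsOn: Build
  jobs:
  - job: DeployUAT
    steps:
    - script: echo "deploy the same artifact to UAT"
- stage: Prod
  dependsOn: UAT
  # Skipped unless the source branch starts with release/ -- the gating described above.
  condition: and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/heads/release/'))
  jobs:
  - job: DeployProd
    steps:
    - script: echo "deploy the same artifact to Prod"
```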
Oops, don't do this. It was pointed out to me that this is wholly insecure: curl http://169.254.169.254/latest/user-data will show any unprivileged user the private keys. The data also gets saved as /run/cloud-init/instance-data.json
There is no module to make this easier, and there is no argument under system_info (which is how you add and configure the user) to ease configuring the user's SSH keys. The way I went about this was adding something like this in my main.tf to populate the variable ssh_keys_user:
ssh_keys_user = {
  write_files = [
    {
      path        = "/home/ecarroll/.ssh/id_rsa"
      content     = file("./ssh/user/cp-terraform-user-id_rsa")
      owner       = "ecarroll:ecarroll"
      permissions = "0600"
      defer       = true
    },
    {
      path        = "/home/ecarroll/.ssh/id_rsa.pub"
      content     = file("./ssh/user/cp-terraform-user-id_rsa.pub")
      owner       = "ecarroll:ecarroll"
      permissions = "0644"
      defer       = true
    },
    {
      path        = "/home/ecarroll/.ssh/id_ecdsa"
      content     = file("./ssh/user/cp-terraform-user-id_ecdsa")
      owner       = "ecarroll:ecarroll"
      permissions = "0600"
      defer       = true
    },
    {
      path        = "/home/ecarroll/.ssh/id_ecdsa.pub"
      content     = file("./ssh/user/cp-terraform-user-id_ecdsa.pub")
      owner       = "ecarroll:ecarroll"
      permissions = "0644"
      defer       = true
    },
    {
      path        = "/home/ecarroll/.ssh/id_ed25519"
      content     = file("./ssh/user/cp-terraform-user-id_ed25519")
      owner       = "ecarroll:ecarroll"
      permissions = "0600"
      defer       = true
    },
    {
      path        = "/home/ecarroll/.ssh/id_ed25519.pub"
      content     = file("./ssh/user/cp-terraform-user-id_ed25519.pub")
      owner       = "ecarroll:ecarroll"
      permissions = "0644"
      defer       = true
    }
  ]
}
Then I wired it into my cloud-init template like this:
write_files:
${ yamlencode( ssh_keys_user.write_files ) }
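For completeness, one way these pieces could be tied together (templatefile and the local/template names here are my assumptions for illustration, not from the original setup):

```hcl
# Hypothetical wiring: render the cloud-init template with the write_files list.
user_data = templatefile("${path.module}/cloud-init.yaml.tpl", {
  ssh_keys_user = local.ssh_keys_user
})
```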
I generated these files with a Makefile like this:
user/cp-terraform-user-id_ecdsa:
	-mkdir user 2> /dev/null;
	ssh-keygen -C "User key for SSH authentication to repos" -N "" -b 521 -t ecdsa -f "$@";
	touch "$@";

user/cp-terraform-user-id_ed25519:
	-mkdir user 2> /dev/null;
	ssh-keygen -C "User key for SSH authentication to repos" -N "" -t ed25519 -f "$@";
	touch "$@";

user/cp-terraform-user-id_rsa:
	-mkdir user 2> /dev/null;
	ssh-keygen -C "User key for SSH authentication to repos" -N "" -b 4096 -t rsa -f "$@";
	touch "$@";
This works fine. Then I just added the .pub files to Bitbucket and GitLab.
I'm not sure what the technical terms are, but whenever I've played Pokémon games and successfully thrown a ball to catch one, the ball has always wiggled three times before clicking and confirming the catch.
Now I'm playing Pokémon Violet, and this weekend I noticed that for some of the balls I throw, the animation is slightly different: the ball only wiggles once before clicking. Of course, I much prefer this, as I've had no Pokémon escape between the wiggle and the click. But I'm not sure what I was doing that caused these; I haven't been able to determine a pattern.
Is this a random thing, or can I take specific steps to get more of these one-wiggle catches?
I am having difficulty understanding some differences between an item and a block. This is what I understand so far:
| Inventory | Block | Item | Understanding |
|---|---|---|---|
| Stone | x | | In its placed position, it is a cube/block |
| Dandelion | x | | In its placed position, it takes up an entire cube/block area and nothing else can be placed at that coordinate |
| Iron Axe | | x | It can only be in the player's hand; if the right click (or the set shortcut for placing blocks) is triggered, the axe is either used for a different function or dropped out of the player's hand |
I am mostly confused about whether blocks are also considered items when they are not placed at a coordinate: for example, when the block is in a chest, when the block is in the player's hand, or when the block is inside a block entity (such as cobblestone in a furnace). What exactly is considered a block, and what an item?