tag:blogger.com,1999:blog-230766802024-03-13T19:06:10.200-04:00TΩИΨLive every day like it is your last...Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.comBlogger302125tag:blogger.com,1999:blog-23076680.post-88533979971967067712020-01-24T10:38:00.001-05:002020-01-24T10:38:32.719-05:00Ansible Template Tip: Split a StringAnsible is the simplest way to automate apps and IT infrastructure. Application Deployment + Configuration Management + Continuous Delivery. We use Ansible heavily for our AWS deployment stack.<br />
<br />
When working with Ansible templates, sometimes you need to split a string and use part of its value. For example, say you have an application template that sets up the application base URL, like the following:<br />
<br />
<i><span style="font-family: Verdana, sans-serif;"><?xml version='1.1' encoding='UTF-8'?></span></i><br />
<i><span style="font-family: Verdana, sans-serif;"><app.model.AppLocationConfiguration></span></i><br />
<i><span style="font-family: Verdana, sans-serif;"> <adminAddress>admin@app.com</adminAddress></span></i><br />
<i><span style="font-family: Verdana, sans-serif;"> <appUrl>https://{{ app.split("-")[0] | lower }}-{{ env | lower }}.app.com/</appUrl></span></i><br />
<i><span style="font-family: Verdana, sans-serif;"></app.model.AppLocationConfiguration></span></i><br />
<br />
We have a variable "app" containing the string "asia-101.test.org"; to extract the geolocation part of this string, we can just use "{{ app.split("-")[0] | lower }}".<br />
<br />
With "env" set to "prod", the constructed URL will be "https://asia-prod.app.com/".<br />
<br />
Ansible uses Jinja2 for templating, so you can call Python string methods such as split() and apply Jinja2 filters such as lower (<a href="https://jinja.palletsprojects.com/en/2.10.x/templates/#filters">https://jinja.palletsprojects.com/en/2.10.x/templates/#filters</a>).<br />
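Outside of Ansible, the same expression can be reproduced with plain Python string operations; a minimal sketch using the example values from this post ("env" is assumed to be "PROD" here):

```python
# Plain-Python equivalent of the Jinja2 expression
# "{{ app.split('-')[0] | lower }}-{{ env | lower }}".
app = "asia-101.test.org"
env = "PROD"  # assumed example value

geo = app.split("-")[0].lower()  # everything before the first "-" -> "asia"
app_url = "https://{}-{}.app.com/".format(geo, env.lower())
print(app_url)  # https://asia-prod.app.com/
```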
<br />
<br />Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-56729801134285904792019-08-04T15:41:00.003-04:002019-08-04T15:41:51.135-04:00AWS CDK Python - Sample ProjectThis blog shows you how to create a sample project using AWS CDK Python. If you don't have the environment set up yet, check out my previous blog: https://www.lixu.ca/2019/08/aws-cdk-python-how-to-get-started.html<br />
<br />
<h4>
Create the Python Application</h4>
1. Fire up your Python 3 virtual environment, then create a project directory that contains your Python project source code<br />
<i>$ source cdk-venv/bin/activate</i><br />
<i>(cdk-venv) $ mkdir sample-dev</i><br />
<i>(cdk-venv) $ cd sample-dev</i><br />
<i>(cdk-venv) $ cdk init</i><br />
<i>Available templates:</i><br />
<i>* app: Template for a CDK Application</i><br />
<i> └─ cdk init app --language=[csharp|fsharp|java|javascript|python|typescript]</i><br />
<i>* lib: Template for a CDK Construct Library</i><br />
<i> └─ cdk init lib --language=typescript</i><br />
<i>* sample-app: Example CDK Application with some constructs</i><br />
<i> └─ cdk init sample-app --language=[javascript|python|typescript]</i><br />
<br />
2. The information returned from cdk is very self-explanatory. We can choose "sample-app" so that the created app already comes with some existing constructs<br />
<i>$ cdk init --language python sample-app</i><br />
<i>Applying project template sample-app for python</i><br />
<i>Initializing a new git repository...</i><br />
<i>Executing Creating virtualenv...</i><br />
<i>...</i><br />
<br />
3. The entire sample project looks like this:<br />
<i>.</i><br />
<i>├── README.md - Readme file</i><br />
<i>├── app.py - Defines app stacks, also the "main" file</i><br />
<i>├── cdk.json - Configuration file for CDK that defines what executable CDK should run to generate CDK construct tree</i><br />
<i>├── hello</i><br />
<i>│ ├── __init__.py - To make a Python module</i><br />
<i>│ ├── hello.egg-info</i><br />
<i>│ │ ├── PKG-INFO</i><br />
<i>│ │ ├── SOURCES.txt</i><br />
<i>│ │ ├── dependency_links.txt</i><br />
<i>│ │ ├── requires.txt</i><br />
<i>│ │ └── top_level.txt</i><br />
<i>│ ├── hello_construct.py - A custom CDK construct defined for use in your CDK application.</i><br />
<i>│ └── hello_stack.py - A custom CDK stack construct for use in your CDK application</i><br />
<i>├── requirements.txt - Required Python modules</i><br />
<i>├── setup.py - Defines how this Python package would be constructed and what the dependencies are</i><br />
<i>└── tests - Test folder</i><br />
<i> ├── __init__.py</i><br />
<i> └── unit - All the unit tests</i><br />
<i> ├── __init__.py</i><br />
<i> └── test_hello_construct.py</i><br />
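Of these files, cdk.json is worth a quick look. For a Python app it typically just tells the CDK toolkit how to run the entry point (exact contents may vary between CDK versions):

```json
{
  "app": "python3 app.py"
}
```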
You can take time to read and understand the source code, but essentially what this code does is create two CloudFormation templates, "hello-cdk-1" and "hello-cdk-2", and deploy them to the "us-east-1" and "us-west-1" regions. Each template includes:<br />
<br />
<ul>
<li>An IAM user with predefined policies</li>
<li>An AWS SQS queue</li>
<li>An AWS SNS topic subscribed to the SQS queue</li>
<li>A few AWS S3 buckets</li>
</ul>
<br />
<br />
4. Give the README.md file a read; you can ignore the Python virtual env part, as we are already using a virtual env. As the README.md suggests, we are going to install all required Python packages and run some unit tests.<br />
<i>(cdk-venv) $ pip install -r requirements.txt</i><br />
<i>Collecting pytest (from -r requirements.txt (line 2))</i><br />
<i> Downloading https://files.pythonhosted.org/packages/69/1d/2430053122a3c6106f7fd1ff0bc68eb73e27db8f951db70fcd942da52c7b/pytest-5.0.1-py3-none-any.whl (221kB)</i><br />
<i> 100% |████████████████████████████████| 225kB 4.5MB/s </i><br />
<i><br /></i>
<i>(cdk-venv) $ pytest</i><br />
<i>================================================================================================================= test session starts</i><br />
<i>=================================================================================================================</i><br />
<i>platform darwin -- Python 3.7.1, pytest-5.0.1, py-1.8.0, pluggy-0.12.0</i><br />
<i>rootdir: /Users/txu/code/flashhop-dev</i><br />
<i>collected 1 item </i><br />
<i><br /></i>
<i>tests/unit/test_hello_construct.py . </i><br />
<br />
5. Generate the AWS CloudFormation templates<br />
<i>(cdk-venv) $ cdk synth</i><br />
<i>Successfully synthesized to /Users/txu/code/flashhop-dev/cdk.out</i><br />
<i>Supply a stack name (hello-cdk-1, hello-cdk-2) to display its template.</i><br />
<i>All the output files will be located in "cdk.out"</i><br />
<br />
6. Deploy the generated templates to your AWS account and validate that the right resources got created.<br />
<i>(cdk-venv) $ cdk deploy --profile YOUR_AWS_PROFILE hello-cdk-1</i><br />
<i>...</i><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgy2JmbgHvbsDAX-PSfJAsYS9PRWqRyMTbUbrIVSHid-9UgJd3lFchHAiBdJ2Xrr_SqZLa-CreCdiBky7jvTmqGAbefm9MB7z9knhtR9INMyvxAV9QmqZqBtL7PjklweWnHPbNcgg/s1600/deploy.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="392" data-original-width="1139" height="137" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgy2JmbgHvbsDAX-PSfJAsYS9PRWqRyMTbUbrIVSHid-9UgJd3lFchHAiBdJ2Xrr_SqZLa-CreCdiBky7jvTmqGAbefm9MB7z9knhtR9INMyvxAV9QmqZqBtL7PjklweWnHPbNcgg/s400/deploy.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNHeQKHL66K_Wrv1rFrJ5CsIunlTOdhpVgRYkb5IRHGvDNrqsRUPCFFSMlNdH8A5kN6WvbL2Ah1gEBlxRQ0yNyyFHNWMFii1dA35GWCskTkmz9Xxxj0Z0NPlycqS3Igp9QvOwSow/s1600/sqs-queue.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="166" data-original-width="459" height="143" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNHeQKHL66K_Wrv1rFrJ5CsIunlTOdhpVgRYkb5IRHGvDNrqsRUPCFFSMlNdH8A5kN6WvbL2Ah1gEBlxRQ0yNyyFHNWMFii1dA35GWCskTkmz9Xxxj0Z0NPlycqS3Igp9QvOwSow/s400/sqs-queue.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpJvfG4qqoN8R4FIWqLU3R4MO099bZK0d0kR1tGwA0UFL8sUXlfIRW8dc9zXrq9prudPtsWJ8Dpb5JshyMiPWv13JzWSs8WBuiarUQnvuBG2QdwnimJnoo1_Xhu2ql1in2hk47qA/s1600/sns-topic.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="253" data-original-width="545" height="183" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpJvfG4qqoN8R4FIWqLU3R4MO099bZK0d0kR1tGwA0UFL8sUXlfIRW8dc9zXrq9prudPtsWJ8Dpb5JshyMiPWv13JzWSs8WBuiarUQnvuBG2QdwnimJnoo1_Xhu2ql1in2hk47qA/s400/sns-topic.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGGE6eyZK1P6j45_9Vgu18rUnkcI33nToH5ACnLhwV1ctaJCMZfopeWX3RLw8nnJs9CcX3vVlVKYEgLQojBF367Ejcv1N8nEJ4qkgTQkjhW05_W22R_HGDunesToLf0vvD-PqQpg/s1600/s3-bucket.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="240" data-original-width="581" height="163" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGGE6eyZK1P6j45_9Vgu18rUnkcI33nToH5ACnLhwV1ctaJCMZfopeWX3RLw8nnJs9CcX3vVlVKYEgLQojBF367Ejcv1N8nEJ4qkgTQkjhW05_W22R_HGDunesToLf0vvD-PqQpg/s400/s3-bucket.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
7. To destroy the stack<br />
<i>(cdk-venv) $ cdk --profile flashhop-dev destroy hello-cdk-1</i><br />
<i>Are you sure you want to delete: hello-cdk-1 (y/n)? y</i><br />
<i>hello-cdk-1: destroying...</i><br />
<i><br /></i>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJx4OObkRv4AAa4ke7mo5YBgHlh1m5T9LEIqUO8l3qdDrFIA8zYTSArsbvgC-Rq7L8YrnNnGmyGgGrkzUxQqypsCBrj58S3tydk_H1jv2-5EhjOLMrTec1soQVBUvCeFbeppQNjg/s1600/destroy.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="275" data-original-width="728" height="150" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJx4OObkRv4AAa4ke7mo5YBgHlh1m5T9LEIqUO8l3qdDrFIA8zYTSArsbvgC-Rq7L8YrnNnGmyGgGrkzUxQqypsCBrj58S3tydk_H1jv2-5EhjOLMrTec1soQVBUvCeFbeppQNjg/s400/destroy.png" width="400" /></a></div>
<br />At this point, you have learned how to use the AWS CDK CLI to initialize a new Python project/app, synthesize the Python code into AWS CloudFormation templates, deploy them to provision your infrastructure, and destroy your infrastructure. You can learn more from the AWS CDK developer documentation (<a href="https://docs.aws.amazon.com/cdk/latest/guide/home.html">https://docs.aws.amazon.com/cdk/latest/guide/home.html</a>)Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-3714567309359427112019-08-04T14:37:00.001-04:002019-08-04T14:46:28.104-04:00AWS CDK Python - How To Get StartedAssuming you are already familiar with the concept of "IaC" (Infrastructure as Code): in the AWS context, "IaC" means CloudFormation.<br />
<br />
CloudFormation allows you to define your AWS infrastructure in JSON or YAML files that can be managed within your source code repository (Git, for example). You can do pull requests and code reviews. When everything looks good, you can use these files as input to an automated process (CI/CD) that deploys your infrastructure changes.<br />
<br />
The AWS Python CDK builds on AWS CloudFormation and uses it as the engine for provisioning AWS resources. It allows you to compose new abstractions that hide details and simplify common use cases, packaged up as a Python library. This blog shows you how to get started with the AWS Python CDK.<br />
<br />
<h4>
Prerequisites</h4>
<ul>
<li>Node.js (>= 8.11.x): Why? AWS CDK is developed in TypeScript and transpiled to JavaScript. Bindings for Python make use of the AWS CDK back-end running on Node.js, as does the cdk command-line tool.</li>
<li>Your AWS profile and credentials</li>
<li>Python >= 3.6</li>
</ul>
<div>
<h4>
Install AWS Python CDK</h4>
<div>
1. Check npm version</div>
<pre class="brush"><div>
$ npm -v
6.10.0</div>
</pre>
2. Install AWS CDK
<pre class="brush"><div>
$ npm install -g aws-cdk</div>
/usr/local/bin/cdk -> /usr/local/lib/node_modules/aws-cdk/bin/cdk
> core-js@2.6.9 postinstall /usr/local/lib/node_modules/aws-cdk/node_modules/core-js
> node scripts/postinstall || echo "ignore"
Thank you for using core-js ( https://github.com/zloirock/core-js ) for polyfilling JavaScript standard library!
The project needs your help! Please consider supporting of core-js on Open Collective or Patreon:
> https://opencollective.com/core-js
> https://www.patreon.com/zloirock
Also, the author of core-js ( https://github.com/zloirock ) is looking for a good job -)
+ aws-cdk@1.3.0
</pre>
3. Check CDK version
<pre class="brush">
$ cdk --version
1.3.0 (build bba9914)
</pre>
4. Check Python version
<pre class="brush">
$ python3.7 -V
Python 3.7.1
</pre>
5. Create a Python virtual env
<pre class="brush">
$ python3.7 -m venv cdk-venv
$ source cdk-venv/bin/activate
(cdk-venv) $
</pre>
6. Updating Python Dependencies
<pre class="brush">
(cdk-venv) $ which pip
/Users/txu/code/cdk-venv/bin/pip
(cdk-venv) $ pip install --upgrade aws-cdk.cdk
Collecting aws-cdk.cdk
Downloading https://files.pythonhosted.org/packages/45/77/07f23d943aece234c91230844595f5495ae6ef5be668b2c9e818ee479ef3/aws_cdk.cdk-0.36.1-py3-none-any.whl
Collecting publication>=0.0.3 (from aws-cdk.cdk)
...
Installing collected packages: publication, typing-extensions, mypy-extensions, attrs, cattrs, six, python-dateutil, jsii, aws-cdk.cdk
Successfully installed attrs-19.1.0 aws-cdk.cdk-0.36.1 cattrs-0.9.0 jsii-0.13.4 mypy-extensions-0.4.1 publication-0.0.3 python-dateutil-2.8.0 six-1.12.0 typing-extensions-3.7.4
</pre>
<div>
7. Create an AWS profile (if you don't have one already)</div>
<pre class="brush">
(cdk-venv) $ aws configure --profile test-dev
AWS Access Key ID [None]: xxxxx
AWS Secret Access Key [None]: xxxx
Default region name [None]: ca-central-1
Default output format [None]: json
</pre>
8. Check CDK version
<pre class="brush">
(cdk-venv) $ cdk --version
1.3.0 (build bba9914)
</pre>
<div>
At this point, you have successfully installed the AWS CDK and its Python bindings.</div>
<div>
<br /></div>
<div>
In our next blog, we will create an example app stack.</div>
</div>
Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-23715745307514489462019-07-21T16:51:00.001-04:002019-07-21T16:54:18.841-04:00AWS EC2 - Install and Configure virtualenvwrapper with Python3virtualenvwrapper is a very nice wrapper around the Python virtualenv tool. It includes wrappers for creating and deleting virtual environments and otherwise managing your development workflow, making it easier to work on more than one project at a time without introducing conflicts in their dependencies. I really like how easily it lets you switch between different projects and virtual environments; it saves you a lot of time. This blog shows you how to install and configure it on an AWS EC2 instance using Python 3<br />
<br />
<h4>
<b>Install Python3</b></h4>
<pre class="brush:bash">$ sudo yum install python3
$ pip3 -V
pip 9.0.3 from /usr/lib/python3.7/site-packages (python 3.7)</pre>
<h4>
Install virtualenvwrapper globally</h4>
<pre class="brush:bash">$ sudo pip3 install virtualenvwrapper
WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
Collecting virtualenvwrapper
...
Installing collected packages: virtualenv, virtualenv-clone, pbr, six, stevedore, virtualenvwrapper
Running setup.py install for virtualenvwrapper ... done
Successfully installed pbr-5.4.1 six-1.12.0 stevedore-1.30.1 virtualenv-16.6.2 virtualenv-clone-0.5.3 virtualenvwrapper-4.8.4</pre>
<h4>
Create project folder and virtual environment folder</h4>
<pre class="brush:bash">$ ll /home/ec2-user/
total 8
drwxrwxr-x 2 ec2-user ec2-user 6 Jul 21 20:27 code
drwxrwxr-x 2 ec2-user ec2-user 6 Jul 21 20:27 venvs</pre>
<h4>
Locate "virtualenvwrapper.sh"</h4>
<pre class="brush:bash">$ which virtualenvwrapper.sh
/usr/local/bin/virtualenvwrapper.sh</pre>
<h4>
Update bash profile</h4>
<pre class="brush:bash">$ vim ~/.bash_profile
Append the following:
# Configure virtualenvwrapper
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh
export WORKON_HOME="/home/ec2-user/venvs"
export PROJECT_HOME="/home/ec2-user/code"</pre>
<h4>
Re-source bash profile</h4>
<pre class="brush:bash">$ source ~/.bash_profile
virtualenvwrapper.user_scripts creating /home/ec2-user/.virtualenvs/premkproject
virtualenvwrapper.user_scripts creating /home/ec2-user/.virtualenvs/postmkproject
virtualenvwrapper.user_scripts creating /home/ec2-user/.virtualenvs/initialize
virtualenvwrapper.user_scripts creating /home/ec2-user/.virtualenvs/premkvirtualenv
virtualenvwrapper.user_scripts creating /home/ec2-user/.virtualenvs/postmkvirtualenv
virtualenvwrapper.user_scripts creating /home/ec2-user/.virtualenvs/prermvirtualenv
virtualenvwrapper.user_scripts creating /home/ec2-user/.virtualenvs/postrmvirtualenv
virtualenvwrapper.user_scripts creating /home/ec2-user/.virtualenvs/predeactivate
virtualenvwrapper.user_scripts creating /home/ec2-user/.virtualenvs/postdeactivate
virtualenvwrapper.user_scripts creating /home/ec2-user/.virtualenvs/preactivate
virtualenvwrapper.user_scripts creating /home/ec2-user/.virtualenvs/postactivate
virtualenvwrapper.user_scripts creating /home/ec2-user/.virtualenvs/get_env_details
</pre>
<h4>
Create a new project</h4>
<pre class="brush:bash">$ mkproject new_proj
Using base prefix '/usr'
No LICENSE.txt / LICENSE found in source
New python executable in /home/ec2-user/venvs/new_proj/bin/python3
Also creating executable in /home/ec2-user/venvs/new_proj/bin/python
Installing setuptools, pip, wheel...
done.
virtualenvwrapper.user_scripts creating /home/ec2-user/venvs/new_proj/bin/predeactivate
virtualenvwrapper.user_scripts creating /home/ec2-user/venvs/new_proj/bin/postdeactivate
virtualenvwrapper.user_scripts creating /home/ec2-user/venvs/new_proj/bin/preactivate
virtualenvwrapper.user_scripts creating /home/ec2-user/venvs/new_proj/bin/postactivate
virtualenvwrapper.user_scripts creating /home/ec2-user/venvs/new_proj/bin/get_env_details
Creating /home/ec2-user/code/new_proj
Setting project for new_proj to /home/ec2-user/code/new_proj
(new_proj) [ec2-user@ip-172-31-36-216 new_proj]$ ls ~/venvs/
new_proj
(new_proj) [ec2-user@ip-172-31-36-216 new_proj]$ ls ~/code/
new_proj
</pre>
<br />
You can see that a "new_proj" folder inside "~/code" and a new virtual environment "new_proj" inside "~/venvs" were created automatically, and the "new_proj" virtual environment was automatically activated. Let's create a second project and show how easily we can switch between projects and their virtual environments.<br />
<pre class="brush:bash">$ mkproject new_proj2
...
Creating /home/ec2-user/code/new_proj2
Setting project for new_proj2 to /home/ec2-user/code/new_proj2</pre>
<h4>
Switch to "new_proj" and its virtual env:</h4>
<pre class="brush:bash">(new_proj2) [ec2-user@hostname new_proj2]$ workon new_proj
(new_proj) [ec2-user@hostname new_proj]$
</pre>
You can see how easily you can switch between projects without having to deactivate, reactivate, change folders, etc.!<br />
<h4>
You can list all the existing virtual environments:</h4>
<pre class="brush:bash"># List environments
$ workon
new_proj2
new_proj</pre>
<br />
For more information, check out <a href="https://virtualenvwrapper.readthedocs.io/en/latest/">https://virtualenvwrapper.readthedocs.io/en/latest/</a>Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-35025586756182987342019-05-05T11:00:00.004-04:002019-05-05T11:02:55.896-04:00Datadog- Artifactory JMX MonitoringThe Datadog JMX integration collects metrics from applications that expose JMX metrics. A lightweight Java plugin named "JMXFetch" is called by the Datadog Agent to connect to the MBean server and collect metrics; it can also send service checks that report on the status of your monitored instances.<br />
<br />
This plugin sends metrics to the Datadog Agent using the DogStatsD server running within the Agent. In this blog, we will show you how to collect and send metrics from an Artifactory application.<br />
<br />
<h4>
Prerequisites:</h4>
<ol>
<li>You have a running Artifactory application with JMX enabled (<a href="https://jfrog.com/knowledge-base/how-do-i-use-jmx-remote-monitoring/">https://jfrog.com/knowledge-base/how-do-i-use-jmx-remote-monitoring/</a>)</li>
<li>You have a running Datadog agent which has access to your Artifactory JMX port</li>
</ol>
<br />
<h4>
Configuration:</h4>
You need to enable JMX monitoring in Datadog by creating a file "/etc/datadog-agent/conf.d/jmx.d/conf.yaml" with the following configuration:<br />
<br />
<pre class="brush:bash">init_config:
  new_gc_metrics: true
instances:
  - host: localhost # If the DD agent is running on the same host
    port: 9010
    user: <USER_NAME>
    password: <PASSWORD>
    name: artifactory-app
    tags:
      env: prod
    conf:
      - include:
          domain: org.jfrog.artifactory
          bean:
            - org.jfrog.artifactory:instance=Artifactory, type=Storage,prop=Binary Storage
            - org.jfrog.artifactory:instance=Artifactory, type=Repositories,prop=test-repo
          attribute:
            # Binary storage size
            - Size
            # Repo
            - ArtifactsTotalSize
            - ArtifactsCount
      - include:
          domain: Catalina
          bean:
            - Catalina:type=ThreadPool,name="http-nio-8081"
          attribute:
            - maxThreads
            - currentThreadCount
            - currentThreadsBusy
</pre>
This following graph shows you a visual mapping between Mbeans and the actual Datadog configuration.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgT-nMN3azkFruKJSIXWEVHe8gvtKmrLxNvnz4Uiq4Exi77UoOh_bX5QhyphenhyphenVAw7ekgd6bnIrxp9zdB-kGEx9w4yMNzyfp2KPLq9RoyC5dvQ6dXt9984EqVi8lFf5MjpwIb1dSSQKyg/s1600/Screen+Shot+2019-05-05+at+10.25.26+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="549" data-original-width="998" height="352" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgT-nMN3azkFruKJSIXWEVHe8gvtKmrLxNvnz4Uiq4Exi77UoOh_bX5QhyphenhyphenVAw7ekgd6bnIrxp9zdB-kGEx9w4yMNzyfp2KPLq9RoyC5dvQ6dXt9984EqVi8lFf5MjpwIb1dSSQKyg/s640/Screen+Shot+2019-05-05+at+10.25.26+AM.png" width="640" /></a></div>
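As a reference for reading the config above: a JMX MBean object name is a domain followed by a comma-separated list of key=value properties, which is exactly what the "domain" and "bean" keys mirror. A small illustrative sketch (the parsing here is simplified and only for explanation):

```python
# An MBean object name has the form "<domain>:<key>=<value>,<key>=<value>,...".
# The "domain" and "bean" entries in the Datadog config mirror this structure.
bean_name = "org.jfrog.artifactory:instance=Artifactory,type=Storage,prop=Binary Storage"

domain, _, props = bean_name.partition(":")
properties = dict(p.split("=", 1) for p in props.split(","))

print(domain)              # org.jfrog.artifactory
print(properties["type"])  # Storage
print(properties["prop"])  # Binary Storage
```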
<br />
<br />
You can also use the new style of MBean configuration, as described in the Datadog Java integration documentation (https://docs.datadoghq.com/integrations/java/):<br />
<br />
<pre class="brush:bash">conf:
  - include:
      domain: org.apache.cassandra.db
      attribute:
        - BloomFilterDiskSpaceUsed
        - BloomFilterFalsePositives
        - BloomFilterFalseRatio
        - Capacity
        - CompressionRatio
        - CompletedTasks
        - ExceptionCount
        - Hits
        - RecentHitRate
</pre>
I couldn't get this new configuration to work like the old one yet. It looks much easier to configure, but it didn't work for me; for some reason it was not sending metrics to the Datadog platform.<br />
<br />
You can use the "datadog-agent jmx list collected" command to check which JMX metrics you are collecting.<br />
<br />
<h4>
Note1:</h4>
To run more than one JMX check, create configuration files with the format jmx_<INDEX>.d/conf.yaml (e.g. jmx_1.d/conf.yaml, jmx_2.d/conf.yaml, etc.). Each folder should be stored in the conf.d directory. Include the is_jmx option set to true in those configuration files.<br />
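For example, a second check might look like this (a minimal sketch; the host and port values are placeholders):

```yaml
# conf.d/jmx_1.d/conf.yaml
init_config:
  is_jmx: true
instances:
  - host: localhost
    port: 9011
```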
<br />
<h4>
The 350 metric limit</h4>
Due to the nature of these integrations, it is possible to submit an extremely high number of metrics directly to Datadog. Many customers agree that some of these metrics are not needed. Therefore, Datadog has set the limit at 350 metrics.Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-45430185039269946182019-04-28T20:53:00.000-04:002019-04-28T20:53:01.769-04:00Logstash - AWS S3 Bucket As Data InputYou can use the "S3 input plugin" to stream events from files in an AWS S3 bucket. Each line from each file generates an event. Files ending in ".gz" are handled as gzipped files. Glacier files will be skipped.<br />
<br />
Logstash version: 6.7.1<br />
<br />
Here is a basic configuration for streaming data:<br />
<br />
<pre class="brush:bash">input {
s3 {
"access_key_id" => "1234"
"secret_access_key" => "secret"
"bucket" => "logstash-test-aws-s3-bucket"
"additional_settings" => {
"force_path_style" => true
"follow_redirects" => false
}
"region" => "us-east-1"
"prefix" => "logstash-"
"type" => "s3"
}
}
output {
elasticsearch {
cacert => "/path/to/cert"
hosts => "https://elasticsearch1.com:9243"
index => "test-index-%{+YYYY.MM}"
user => "logstash"
password => "logstash"
}
}
</pre>
<br />
Files in this "logstash-test-aws-s3-bucket" AWS S3 bucket whose keys start with "logstash-" will match (including folders).<br />
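The prefix is a plain key-prefix comparison against each object key. Conceptually (the key names below are made up for illustration):

```python
# Sketch of how the S3 input's "prefix" option selects object keys.
# The key names are made up for illustration.
keys = [
    "logstash-2019/app.log",     # matches: folders are just part of the key
    "logstash-error.log",        # matches
    "archive/logstash-old.log",  # no match: the prefix is compared from the start
]
matched = [k for k in keys if k.startswith("logstash-")]
print(matched)  # ['logstash-2019/app.log', 'logstash-error.log']
```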
<br />
Start your Logstash with "logstash -f config/s3-input-logstash.conf" and you should start seeing data coming into your Elasticsearch cluster.Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-73064155058448871022019-04-28T20:22:00.001-04:002019-04-28T20:23:08.536-04:00Logstash - Send Data To Multiple Elastic ClustersIt is possible to send the same data from one Logstash server to multiple Elasticsearch clusters. It is easy to configure: you just need to have multiple output definitions for the same events. However, it is not easy to find a clear example on the internet, so I've decided to write this blog.<br />
<h4>
My example Logstash configuration file:</h4>
<pre class="brush:bash">input {
file {
path => ["/path/to/json/file"]
start_position => "beginning"
sincedb_path => "/dev/null"
exclude => "*.gz"
}
}
filter {
mutate {
replace => [ "message", "%{message}" ]
gsub => [ 'message','\n','']
}</pre>
<pre class="brush:bash"> if [message] =~ /^{.*}$/ {
json { source => message }
}
}
output {
elasticsearch {
cacert => "/path/to/cert"
<span style="white-space: pre;"> </span>hosts => "https://elasticsearch1.com:9243"
<span style="white-space: pre;"> </span>index => "test-index-%{+YYYY.MM}"
<span style="white-space: pre;"> </span>user => "logstash"
<span style="white-space: pre;"> </span>password => "logstash"
}
elasticsearch {
cacert => "/path/to/cert"
<span style="white-space: pre;"> </span>hosts => "https://elasticsearch2.com:9243"
<span style="white-space: pre;"> </span>index => "test-index-%{+YYYY.MM}"
<span style="white-space: pre;"> </span>user => "logstash"
<span style="white-space: pre;"> </span>password => "logstash"
}
}</pre>
<h4>
Example of JSON file:</h4>
<pre class="brush:bash">{"foo":"bar", "bar": "foo"}
{"hello":"world", "goodnight": "moon"}
</pre>
<br />
Note that each JSON document needs to be on a single line.<br />
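That is because Logstash's file input emits one event per line, and the filter above only parses events whose whole message looks like a JSON object. A rough plain-Python sketch of that per-line behaviour:

```python
import json

# Each input line is one event; lines that look like a JSON object
# (the /^{.*}$/ check in the filter) get parsed as JSON.
lines = [
    '{"foo":"bar", "bar": "foo"}',
    '{"hello":"world", "goodnight": "moon"}',
]
events = [json.loads(line) for line in lines
          if line.startswith("{") and line.endswith("}")]
print(events[0]["foo"])        # bar
print(events[1]["goodnight"])  # moon
```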
<br />
<b>This setup is all or nothing: if one of the outputs is down, the other will not work either. You should consider whether you really need this setup in the first place, since you now have two copies of data that you need to keep in sync. Outputting to one place and using role-based access control may be a better option.</b>Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-54399703201258311312019-03-03T16:14:00.005-05:002019-03-03T16:16:51.767-05:00Octopus Deploy - How To Install Octopus Deploy In AWS EC2<br />
"Octopus Deploy" is an industry-leading automated deployment and release management tool used by leading continuous delivery teams worldwide. It is designed to simplify deployment of .NET applications, Windows Services and databases.<br />
<br />
This blog shows you how to install the latest LTS version of "Octopus Deploy" in AWS EC2.<br />
<br />
<h4>
Environment:</h4>
<br />
<ul>
<li>Octopus Deploy: 2018.10.5 LTS</li>
<li>EC2: t2.large</li>
<li>OS: "Windows Server 2019 Datacenter"</li>
</ul>
<h4>
Installation Steps:</h4>
<br />
<div>
1. Launch a Windows EC2 instance</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyHw0Ael8pwNTiaLXYxYQfuTt8QjgOqzctc0h8H7aTf3KAi5AYV5spWFQOol3JkAcAZHTTLXbSxlvJgGPWpBmawE2nszM61W847UNYQdUJiwnd0uHVE9r7D1iswAt1a_sHup6kKQ/s1600/1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="87" data-original-width="794" height="68" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyHw0Ael8pwNTiaLXYxYQfuTt8QjgOqzctc0h8H7aTf3KAi5AYV5spWFQOol3JkAcAZHTTLXbSxlvJgGPWpBmawE2nszM61W847UNYQdUJiwnd0uHVE9r7D1iswAt1a_sHup6kKQ/s640/1.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: left;">
2. Select "General purpose", "t2.large"</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjweiRTRYJPgQ7uQcObyOlU-tyTt3aP8T-V2wP7mRL2GYCGvNcnfdm30wvCxDDxR7PEIJtaLNjJkxuRFYxwj_2yW5A48w61de_8Pdg-eITMb8PlpMJU_WH7xVO8Ye43B6SHH1cNDQ/s1600/2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="46" data-original-width="656" height="44" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjweiRTRYJPgQ7uQcObyOlU-tyTt3aP8T-V2wP7mRL2GYCGvNcnfdm30wvCxDDxR7PEIJtaLNjJkxuRFYxwj_2yW5A48w61de_8Pdg-eITMb8PlpMJU_WH7xVO8Ye43B6SHH1cNDQ/s640/2.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: left;">
3. Configure Storage</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggXT2UZITyUO00p4MOgpdqBjdRYJ_-PrNvFMF4ACj-78XSGaD8QQnr9Aq4R2jO8tV-pYfuB9INOUQqv7Lcud5SuOf75zD_Kt5RixPwAD7rL24Lxr1yCDnJHRVtX3N7ZtShQDuElw/s1600/3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="119" data-original-width="765" height="98" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggXT2UZITyUO00p4MOgpdqBjdRYJ_-PrNvFMF4ACj-78XSGaD8QQnr9Aq4R2jO8tV-pYfuB9INOUQqv7Lcud5SuOf75zD_Kt5RixPwAD7rL24Lxr1yCDnJHRVtX3N7ZtShQDuElw/s640/3.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both;">
4. Configure tags</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieGRfDBqf13c0v4oMPHGW_UiBAnLhjZLVrRRRW0hQO6f-2KxUxknkqyzomDBKL8lx5qlMxDLL_5sM_05TEtWBpUKbhqpCfqVX99i_HOaeDCy1fuJtXTCI4G7CTvjjVpcZuuTMnqg/s1600/4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="196" data-original-width="801" height="156" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieGRfDBqf13c0v4oMPHGW_UiBAnLhjZLVrRRRW0hQO6f-2KxUxknkqyzomDBKL8lx5qlMxDLL_5sM_05TEtWBpUKbhqpCfqVX99i_HOaeDCy1fuJtXTCI4G7CTvjjVpcZuuTMnqg/s640/4.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: left;">
5. Configure security group</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwgHrdLjgQ6d6WhXKQUDaLvOMpV8cD0yMSMWacrC1ZekBER8nhZplDEdUcsJwM5_K4yo7WnLzk8fyEESZpPbLZ1-IsOeXT5-Yh8Akx9AmZlB36iy6D8zhE2jCkTeIM0gbSjPM0Vg/s1600/5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="187" data-original-width="790" height="150" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwgHrdLjgQ6d6WhXKQUDaLvOMpV8cD0yMSMWacrC1ZekBER8nhZplDEdUcsJwM5_K4yo7WnLzk8fyEESZpPbLZ1-IsOeXT5-Yh8Akx9AmZlB36iy6D8zhE2jCkTeIM0gbSjPM0Vg/s640/5.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: left;">
6. Once the instance is up and running, RDP into the server. You can get the connection details from the "Connect" button.</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhutEzukxrr9xxwVyxl3gw1vfmXhjIMuwectdnOoKUb9NhNJF-u3J9BlJfb-x0XXSR07f5VNrGfMmEH7dlE9-amh461P1pdmsM8tBlsXtUkJ61BvPzsd1pziMZC8sFMYmA0iE621g/s1600/6.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="306" data-original-width="618" height="316" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhutEzukxrr9xxwVyxl3gw1vfmXhjIMuwectdnOoKUb9NhNJF-u3J9BlJfb-x0XXSR07f5VNrGfMmEH7dlE9-amh461P1pdmsM8tBlsXtUkJ61BvPzsd1pziMZC8sFMYmA0iE621g/s640/6.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: left;">
7. Open up IE on the remote server and download "Octopus Deploy 2018.10.5 LTS"</div>
<div class="separator" style="clear: both; text-align: left;">
<b>Note:</b> by default, "Internet Explorer Enhanced Security Configuration" is on; you need to turn it off before you can download anything.</div>
<div style="text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZElOCasp1tftvTYOEJKxOJCS0_DpPEWPNysxk8PJMSX4WZueR5VkYQpPcOvWjLQzA91IgjcUwSMPMnDE2yD2RCsSrDY9mm6cSx5krW2R9dPb9pFELoJpt36CncKt47hI6OVo2XA/s1600/7-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="284" data-original-width="726" height="156" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZElOCasp1tftvTYOEJKxOJCS0_DpPEWPNysxk8PJMSX4WZueR5VkYQpPcOvWjLQzA91IgjcUwSMPMnDE2yD2RCsSrDY9mm6cSx5krW2R9dPb9pFELoJpt36CncKt47hI6OVo2XA/s400/7-2.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigr6fKT12-IeYowP05wXEz5IKypZdlaAh1TopOP5R1hdbZRFCtg1Do_Sj6EoJYVO7qzLCr-siPVyEA7PQmof47xwIcsCFZiz4n81GU4kd3g5kLeXEsDzXp5HDWnUPTG8sEp4SPAA/s1600/7.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="341" data-original-width="639" height="212" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigr6fKT12-IeYowP05wXEz5IKypZdlaAh1TopOP5R1hdbZRFCtg1Do_Sj6EoJYVO7qzLCr-siPVyEA7PQmof47xwIcsCFZiz4n81GU4kd3g5kLeXEsDzXp5HDWnUPTG8sEp4SPAA/s400/7.png" width="400" /></a></div>
<br />
8. Double-click to install "Octopus.2018.10.5"<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJP3QfYQOBvDSztiy67v3a0QsqkEy-1CPh6G91iz1wg_ee1Xa3947mKvV7SuFXFqbz_jCBqWRRepehuaePn3MRAi05XTk0_Cjb_Ja-c-R4cD9vfnTMR_Ym6y_gwYYhUbH69VjF3g/s1600/8-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="361" data-original-width="454" height="317" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJP3QfYQOBvDSztiy67v3a0QsqkEy-1CPh6G91iz1wg_ee1Xa3947mKvV7SuFXFqbz_jCBqWRRepehuaePn3MRAi05XTk0_Cjb_Ja-c-R4cD9vfnTMR_Ym6y_gwYYhUbH69VjF3g/s400/8-1.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikPjWnHDW5rbERAzX_CHoerUj6VhLXKyuCHjAKVt35xYpmJ1Am9GjP5yY0SeZHbAxX9qigss0l38JxgJg1ycggge8d2MoEDorqXBATGR9fgdIbB92ZGBVPF646ROPU4J3BufAhtw/s1600/8-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="364" data-original-width="465" height="312" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikPjWnHDW5rbERAzX_CHoerUj6VhLXKyuCHjAKVt35xYpmJ1Am9GjP5yY0SeZHbAxX9qigss0l38JxgJg1ycggge8d2MoEDorqXBATGR9fgdIbB92ZGBVPF646ROPU4J3BufAhtw/s400/8-2.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguEcDNsXjoWnd5FdXhyphenhyphen3JJMuS9M7hIpmbO7rE-B_RCrKkDJBKg9bCZrwE82iqHi8A0fYdHYIbH2XFE79-G7ahgjdRp4WWC19gKkSYMAbzZjAf1WJER2gAY809i-eYVbmzozAtN1Q/s1600/8-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="358" data-original-width="460" height="311" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguEcDNsXjoWnd5FdXhyphenhyphen3JJMuS9M7hIpmbO7rE-B_RCrKkDJBKg9bCZrwE82iqHi8A0fYdHYIbH2XFE79-G7ahgjdRp4WWC19gKkSYMAbzZjAf1WJER2gAY809i-eYVbmzozAtN1Q/s400/8-3.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhITqMGAyfbRAgupJUDv3PsWl874Aq6rJCldNZHmfFe-elm667PKHzgloxgBUTzqIDvFB05Mco5DJ40-xSVC9IcrqzCgwspYQYqqG6hBVILuT-j_HLgbbFAMqrZcYPawxpT_4dgpw/s1600/8-4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="359" data-original-width="462" height="310" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhITqMGAyfbRAgupJUDv3PsWl874Aq6rJCldNZHmfFe-elm667PKHzgloxgBUTzqIDvFB05Mco5DJ40-xSVC9IcrqzCgwspYQYqqG6hBVILuT-j_HLgbbFAMqrZcYPawxpT_4dgpw/s400/8-4.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUF_SnfPscQimSdRRqak7zwJJGUzeJSdDt_b8-87PAQhWZJFd_BS5_1BR2aETmJT-yDVgoauAZxNjOm_W7-C1DcfjrqBGd0kPK_PXL4w9fzhhIcvLAqNMOgrj5ci6Fh2o36B-3fg/s1600/8-5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="360" data-original-width="462" height="311" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUF_SnfPscQimSdRRqak7zwJJGUzeJSdDt_b8-87PAQhWZJFd_BS5_1BR2aETmJT-yDVgoauAZxNjOm_W7-C1DcfjrqBGd0kPK_PXL4w9fzhhIcvLAqNMOgrj5ci6Fh2o36B-3fg/s400/8-5.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
9. Complete the Octopus Deploy Server Setup Wizard</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1Lh4wd1-5vmPLosNFCNNLlskoI4ic9xcHnUBkCnmB2ndHK4aI6SQCg2zecinQpmy80WDONfeM-M93zO-fzg4oSXYzSY8uDiuEUpfSY0gJhKMwZX9qaf3xxG5s4XNlM1-b2VWn5g/s1600/9-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="664" data-original-width="923" height="287" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1Lh4wd1-5vmPLosNFCNNLlskoI4ic9xcHnUBkCnmB2ndHK4aI6SQCg2zecinQpmy80WDONfeM-M93zO-fzg4oSXYzSY8uDiuEUpfSY0gJhKMwZX9qaf3xxG5s4XNlM1-b2VWn5g/s400/9-1.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEN1qexPfG_ukHcp88Wr8WL669HilCngfwE7rl7j4rvpCyWvYqbkSaUH6OSqrXA4ZQXKFSV83u4NgygyO3_IF1xsPEmnGT80kqZByVVxsRcCiyhs_gASlbBPXdLDQBCjnAH9HJCA/s1600/9-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="548" data-original-width="818" height="267" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEN1qexPfG_ukHcp88Wr8WL669HilCngfwE7rl7j4rvpCyWvYqbkSaUH6OSqrXA4ZQXKFSV83u4NgygyO3_IF1xsPEmnGT80kqZByVVxsRcCiyhs_gASlbBPXdLDQBCjnAH9HJCA/s400/9-2.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjkNgo_PZg7wAbr1DWawNlsyUmOjLT5K0RGPojPf9FVEzWDiFnAsPn3dSzEkFFmE_TVFcMJNPPzsE0z0c435gAd1oKt02BKhpzJqyBhqde142Wij3u1ftvcGgVOVF2_L4kj-4biuw/s1600/9-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="552" data-original-width="817" height="270" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjkNgo_PZg7wAbr1DWawNlsyUmOjLT5K0RGPojPf9FVEzWDiFnAsPn3dSzEkFFmE_TVFcMJNPPzsE0z0c435gAd1oKt02BKhpzJqyBhqde142Wij3u1ftvcGgVOVF2_L4kj-4biuw/s400/9-3.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjD873fanBHYALyK4fsnUTeoFUdNaXUrq982ZeY_iM041SYZ7ueH4Erd1HI2mzVyqJkmc_MMJefSCSFK9OnOzN4yr0C0UZBfQ4eR4wYRcfpe4jzKZTjOimizgH0KD8nHS7o-Gqe5A/s1600/9-4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="551" data-original-width="819" height="268" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjD873fanBHYALyK4fsnUTeoFUdNaXUrq982ZeY_iM041SYZ7ueH4Erd1HI2mzVyqJkmc_MMJefSCSFK9OnOzN4yr0C0UZBfQ4eR4wYRcfpe4jzKZTjOimizgH0KD8nHS7o-Gqe5A/s400/9-4.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi13zDoHR7ICvijbdIj3z9T13dgHlYr53df7EbjNw0LKv6fDmHb0VJqDAwTpsQhBMQPF834Bu6le30f60BfIq_iLPb-vTk2HYeEYGIfKZZZnbc1w1v50C6s6pBWVAkYvRVxZ8PRpQ/s1600/9-5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="553" data-original-width="820" height="268" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi13zDoHR7ICvijbdIj3z9T13dgHlYr53df7EbjNw0LKv6fDmHb0VJqDAwTpsQhBMQPF834Bu6le30f60BfIq_iLPb-vTk2HYeEYGIfKZZZnbc1w1v50C6s6pBWVAkYvRVxZ8PRpQ/s400/9-5.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Note: You will need a SQL database for Octopus Deploy. If you don't have one, you can download "SQL Server Express (free)" from the Octopus "Database" installation window.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
10. Install SQL Server Express (Basic)</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmy8vI-rrzacyHwDbZ9Vfb3-fX0aQiv6drNwuQ6VSrtXNjtPOgTmF_uDPhQ6ATUIsjxMvDZPwzVGLeMA70YikA7WmL9moowvfK103zsVlvjKNKLQ-sixrTAm-_nX2l3R9QXpvuzw/s1600/10-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="624" data-original-width="791" height="315" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmy8vI-rrzacyHwDbZ9Vfb3-fX0aQiv6drNwuQ6VSrtXNjtPOgTmF_uDPhQ6ATUIsjxMvDZPwzVGLeMA70YikA7WmL9moowvfK103zsVlvjKNKLQ-sixrTAm-_nX2l3R9QXpvuzw/s400/10-1.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIWNLyl5zmwgspH1MUhKu2qkyf-bwfafd8UPKpvFISZy2qZD-_BdvySGocoTE5zwzvkvCgYwust1G3dw8zBHiHAshSNVp498PAyOmxKjpP1A6URhyphenhyphenB-MdxNU3URFuGbTNOJDHWlA/s1600/10-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="625" data-original-width="790" height="316" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIWNLyl5zmwgspH1MUhKu2qkyf-bwfafd8UPKpvFISZy2qZD-_BdvySGocoTE5zwzvkvCgYwust1G3dw8zBHiHAshSNVp498PAyOmxKjpP1A6URhyphenhyphenB-MdxNU3URFuGbTNOJDHWlA/s400/10-2.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRFc8KxF6FUA7m1z3TIpFVtcSZH_NFXmsMNkOXET_sknH3872mrrzbqm2b-ycq_1mOGx6POGfU02plvB5iVQalfE_Aqilp9RTpI_ZYqc9zSt_YaYvWuQgfYuF9zZWNTbciqQlaNA/s1600/10-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="620" data-original-width="785" height="315" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRFc8KxF6FUA7m1z3TIpFVtcSZH_NFXmsMNkOXET_sknH3872mrrzbqm2b-ycq_1mOGx6POGfU02plvB5iVQalfE_Aqilp9RTpI_ZYqc9zSt_YaYvWuQgfYuF9zZWNTbciqQlaNA/s400/10-3.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizjjE8bObN6oK5jNthlkpCHtB_jcju92icr4rF6nDrfMpTfQE5c8deIzIgWEJdICyTbTA3zsTulfqSpn7f-CllgghNrp6RqJGxzOVxoFkJqNm9JYz7G4bnuGdlz0GHbyAFrs61dQ/s1600/10-4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="621" data-original-width="790" height="313" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizjjE8bObN6oK5jNthlkpCHtB_jcju92icr4rF6nDrfMpTfQE5c8deIzIgWEJdICyTbTA3zsTulfqSpn7f-CllgghNrp6RqJGxzOVxoFkJqNm9JYz7G4bnuGdlz0GHbyAFrs61dQ/s400/10-4.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
11. Connect to the local "SQLEXPRESS" service and create an "Octopus" database</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXbjd4Vgk_3EmOMFetxVXzHzDZUcukVAxMnuCaQpUhY1CJ-UrfzB8zQFunqpKmEg6Gq7wO84vESLEz93EDXFES9YOlPU8sBxH2ZFdiqHngTBXHrLRxOkMJ4znb94NA0o9Emc1MIw/s1600/11-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="555" data-original-width="812" height="272" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXbjd4Vgk_3EmOMFetxVXzHzDZUcukVAxMnuCaQpUhY1CJ-UrfzB8zQFunqpKmEg6Gq7wO84vESLEz93EDXFES9YOlPU8sBxH2ZFdiqHngTBXHrLRxOkMJ4znb94NA0o9Emc1MIw/s400/11-1.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKTf3uZXQbbx1zIUs7SQPBQmm_oVwnNUqtyWoxoJ8T2wQapxSkpY6-dGihi-jboN9GFEv2n9MUQpNC5yAcOaXTD8eXYWFJZqU2tw4oYmfe4jrkdt3mVdQYNcpgLDX6FqqRmahYjA/s1600/11-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="551" data-original-width="814" height="270" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKTf3uZXQbbx1zIUs7SQPBQmm_oVwnNUqtyWoxoJ8T2wQapxSkpY6-dGihi-jboN9GFEv2n9MUQpNC5yAcOaXTD8eXYWFJZqU2tw4oYmfe4jrkdt3mVdQYNcpgLDX6FqqRmahYjA/s400/11-2.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihqKowLOEp48rCScbgYHj3iQMOVk0iuCRUmQQqw84b43XRnl_GDvSgIBVlflbhE5k6qC8OcjMGX5KjK4M5l-SQMiGH6t_luBgfbW6zoiYFSvgcZWANKMAc5uBpbg_mNCt78WzuaQ/s1600/11-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="551" data-original-width="818" height="268" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihqKowLOEp48rCScbgYHj3iQMOVk0iuCRUmQQqw84b43XRnl_GDvSgIBVlflbhE5k6qC8OcjMGX5KjK4M5l-SQMiGH6t_luBgfbW6zoiYFSvgcZWANKMAc5uBpbg_mNCt78WzuaQ/s400/11-3.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
12. Finally, complete the installation.</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4naLcMn67TL-kT20wwkZZixo62n3vRizYObzoE9vgkxtrt0heEAsVF-Jxf67zE3HhPD48KrFrxmnk1lLxK5JWx3mcI0pFrZ8Op6D0kz6C0R8uFwMaZIDQzcXn4eHOc1Utjk6BSA/s1600/12-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="551" data-original-width="821" height="267" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4naLcMn67TL-kT20wwkZZixo62n3vRizYObzoE9vgkxtrt0heEAsVF-Jxf67zE3HhPD48KrFrxmnk1lLxK5JWx3mcI0pFrZ8Op6D0kz6C0R8uFwMaZIDQzcXn4eHOc1Utjk6BSA/s400/12-1.png" width="400" /></a></div>
<div class="separator" style="clear: both;">
<br /></div>
<div class="separator" style="clear: both;">
13. Congratulations, you have successfully installed "Octopus Deploy" on an AWS EC2 instance!</div>
<div class="separator" style="clear: both;">
<br /></div>
<div class="separator" style="clear: both;">
Now log in to your Octopus Deploy and start playing around with it!</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHAxeInTEsi484iwKlU4zYTM-4YFt_tFCK2iy6V5keQPsEbBjMw7-WlGY1xdHmq_k1MIzXMrRAKyWHSmYFZMocvI9nOmgepmBjMPsqiEXfLubNNIrBSJqaljZwEZBK1Hcw2EEcaw/s1600/13-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="588" data-original-width="921" height="255" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHAxeInTEsi484iwKlU4zYTM-4YFt_tFCK2iy6V5keQPsEbBjMw7-WlGY1xdHmq_k1MIzXMrRAKyWHSmYFZMocvI9nOmgepmBjMPsqiEXfLubNNIrBSJqaljZwEZBK1Hcw2EEcaw/s400/13-1.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs3xCj3TtlYJo_D0autE7wNqUY92vshwTIOroBLFYs7XKO5PwFQC9khrj8uZJT9UEreoRGkDw7RM8ton78tdjsfaHAZvtk0vcKFV9pBsuyb0kqwilZee_FaiI0lk8j6oAT1YjRmw/s1600/14-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="602" data-original-width="879" height="273" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs3xCj3TtlYJo_D0autE7wNqUY92vshwTIOroBLFYs7XKO5PwFQC9khrj8uZJT9UEreoRGkDw7RM8ton78tdjsfaHAZvtk0vcKFV9pBsuyb0kqwilZee_FaiI0lk8j6oAT1YjRmw/s400/14-1.png" width="400" /></a></div>
<div>
<br /></div>
<div>
<br /></div>
Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-71472979593820594452019-02-24T20:54:00.003-05:002019-02-24T20:57:45.302-05:00ElasticSearch - Part 1 - How To Deploy Single Node Cluster On AWS EC2This blog shows you how to deploy and manage your own ElasticSearch cluster on AWS EC2.<br />
<h4>
</h4>
<h4>
Environment:</h4>
<ul>
<li>Instance: t2.2xlarge (32GB Mem, 8vCPUs)</li>
<li>ElasticSearch version: 6.6.1</li>
</ul>
Assuming you have your AWS environment set up, you are ready to launch an EC2 instance and get SSH access to it.<br />
<h4>
</h4>
<h4>
Provision AWS EC2 instance:</h4>
Elasticsearch runs on various operating systems such as CentOS, Red Hat, Ubuntu, and Amazon Linux. We suggest using the latest Amazon Linux AMI.<br />
<br />
Choose the "t2.2xlarge" instance type, which provides 8 vCPUs, 32 GB of memory, and an EBS volume for data; this is a reasonable starting point. Go ahead and start up the instance. Two things to note:<br />
<ul>
<li>The security group's open ports:</li>
<ul>
<li>port 22: SSH</li>
<li>port 9200: ElasticSearch requests</li>
</ul>
<li>The storage:</li>
<ul>
<li>30 GB; you can expand later</li>
</ul>
</ul>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLC3WGuBZqFgYuGwBHmJ8LGuzQqVWnFQGwAS2DgTwkU2WKR3tKJsJe57GBK5Bvvn2mOajGmb57uBBTgLraJP5MJ2BXeW9qG0NXCZkgrgKnwN5LBGXW0AWGbeNtQVtFOLaIwrdi0Q/s1600/es-setup-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="401" data-original-width="879" height="289" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLC3WGuBZqFgYuGwBHmJ8LGuzQqVWnFQGwAS2DgTwkU2WKR3tKJsJe57GBK5Bvvn2mOajGmb57uBBTgLraJP5MJ2BXeW9qG0NXCZkgrgKnwN5LBGXW0AWGbeNtQVtFOLaIwrdi0Q/s640/es-setup-1.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEji92NxIbcOxGB6o3ONKe9PIJOBybELOzTZ4Q1BEIseza-9ohMycrGfeJ01HA1BW2SuGFt3ce8GXYWc8ZXOFZKX143227Tx4J0zD4FGPQxwolm0TMV80Py4fYWLsnbS5QcjSCzAXA/s1600/es-setup-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="311" data-original-width="876" height="226" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEji92NxIbcOxGB6o3ONKe9PIJOBybELOzTZ4Q1BEIseza-9ohMycrGfeJ01HA1BW2SuGFt3ce8GXYWc8ZXOFZKX143227Tx4J0zD4FGPQxwolm0TMV80Py4fYWLsnbS5QcjSCzAXA/s640/es-setup-2.png" width="640" /></a></div>
<br />
<h4>
</h4>
<h4>
Install ElasticSearch</h4>
Once the EC2 instance is up and running, we can start the ElasticSearch installation.<br />
<br />
1. Log into EC2 instance:<br />
<pre class="brush:bash">$ ssh -i ~/.ssh/tony-aws.pem ec2-user@54.1xx.1xx.xxx
$ sudo su -
$ df -h</pre>
<pre class="brush:bash">Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G     0   16G   0% /dev/shm
tmpfs            16G  472K   16G   1% /run
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/xvda1       30G  1.2G   29G   4% /
tmpfs           3.2G     0  3.2G   0% /run/user/0
tmpfs           3.2G     0  3.2G   0% /run/user/1000
</pre>
<br />
2. Install Java 1.8.0<br />
<pre class="brush:bash">$ yum install java-1.8.0-openjdk</pre>
<pre class="brush:bash">Installed:
java-1.8.0-openjdk.x86_64 1:1.8.0.191.b12-0.amzn2
</pre>
<br />
3. Download ElasticSearch 6.6.1<br />
<pre class="brush:bash">$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.1.rpm
--2019-02-25 01:02:20--  https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.1.rpm
Resolving artifacts.elastic.co (artifacts.elastic.co)... 151.101.250.222, 2a04:4e42:3b::734
Connecting to artifacts.elastic.co (artifacts.elastic.co)|151.101.250.222|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 114067654 (109M) [application/octet-stream]
Saving to: ‘elasticsearch-6.6.1.rpm’
2019-02-25 01:02:22 (115 MB/s) - ‘elasticsearch-6.6.1.rpm’ saved [114067654/114067654]
</pre>
<br />
4. Install the Elasticsearch RPM package on each EC2 instance as instructed below.<br />
<pre class="brush:bash">$ rpm -iv elasticsearch-6.6.1.rpm
warning: elasticsearch-6.6.1.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing packages...
Creating elasticsearch group... OK
Creating elasticsearch user... OK
elasticsearch-0:6.6.1-1.noarch
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service
Created elasticsearch keystore in /etc/elasticsearch
</pre>
<br />
5. By default the Elasticsearch service doesn’t log information in the systemd journal. To enable journalctl logging, the "--quiet" option must be removed from the ExecStart command line in the elasticsearch.service file.<br />
<pre class="brush:bash">$ vim /usr/lib/systemd/system/elasticsearch.service
# Remove --quiet by Tony
#ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet
ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid
</pre>
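If you prefer to script this change instead of editing the file in vim, the same substitution can be sketched with sed. The unit-file path is the one shown above; `strip_quiet` is just an illustrative wrapper around the substitution:

```shell
# Strip a trailing "--quiet" so Elasticsearch logs to the systemd journal.
# The substitution on its own, as a reusable filter:
strip_quiet() { sed 's/ --quiet$//'; }

# Applied in place to the unit file (run as root), then reload systemd:
# sed -i 's/ --quiet$//' /usr/lib/systemd/system/elasticsearch.service
# systemctl daemon-reload
```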
<br />
6. Configure Elasticsearch<br />
Elasticsearch defaults to using /etc/elasticsearch for runtime configuration. Elasticsearch loads its configuration from the /etc/elasticsearch/elasticsearch.yml file by default. The format of this config file is explained in Configuring Elasticsearch (<a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html">https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html</a>).<br />
<br />
<b>Update the bind host:</b><br />
<pre class="brush:bash">$ vim /etc/elasticsearch/elasticsearch.yml
locate "network.host: 192.168.0.1", and update it to
network.host: 0.0.0.0
</pre>
This opens up ElasticSearch to listen on traffic from all hosts.<br />
<br />
<b>Update the cluster name:</b><br />
<pre class="brush:bash">$ vim /etc/elasticsearch/elasticsearch.yml
locate "cluster.name: my-application", and update it to
cluster.name: tony-es-cluster</pre>
<br />
<b>Setting the heap size:</b><br />
By default, Elasticsearch tells the JVM to use a heap with a minimum and maximum size of 1 GB. However, it is important to configure heap size to ensure that Elasticsearch has enough heap available. Elasticsearch will assign the entire heap specified in jvm.options via the Xms (minimum heap size) and Xmx (maximum heap size) settings.<br />
The value for these settings depends on the amount of RAM available on the instance; a rule of thumb is to "set Xmx to no more than 50% of your physical RAM, to ensure that there is enough physical RAM left for kernel file system caches." In our case, the value is 16 GB.<br />
<pre class="brush:bash">$ vim /etc/elasticsearch/jvm.options
Locate "-Xms1g -Xmx1g", and update it to
-Xms16g
-Xmx16g
</pre>
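The 50% rule of thumb above can be sketched as a small helper. `half_ram_gb` is a hypothetical function for illustration, not part of Elasticsearch:

```shell
# Derive a heap size (in GB) from total memory in kB, per the 50% rule.
half_ram_gb() {
  # $1 = MemTotal in kB, as reported in /proc/meminfo
  echo $(( $1 / 1024 / 1024 / 2 ))
}

# On the instance itself you could feed it the live value:
# half_ram_gb "$(awk '/MemTotal/ {print $2}' /proc/meminfo)"
```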
<br />
7. Start ElasticSearch<br />
<pre class="brush:bash">$ systemctl start elasticsearch.service
$ systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-25 01:32:29 UTC; 2s ago
     Docs: http://www.elastic.co
 Main PID: 13803 (java)
   CGroup: /system.slice/elasticsearch.service
           └─13803 /bin/java -Xms16g -Xmx16g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.ttl=60 -Des.networkaddr.
</pre>
<br />
8. Verify with an API request<br />
<pre class="brush:bash">$ curl -X GET http://54.1xx.1xx.1xx:9200/
{
  "name" : "ZAvN4SU",
  "cluster_name" : "tony-es-cluster",
  "cluster_uuid" : "bYSZ8nkqS-mnI8x2F3eHhQ",
  "version" : {
    "number" : "6.6.1",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "1fd8f69",
    "build_date" : "2019-02-13T17:10:04.160291Z",
    "build_snapshot" : false,
    "lucene_version" : "7.6.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
</pre>
<br />
Since we enabled journal logging for ElasticSearch, you can list the journal entries now:<br />
<pre class="brush:bash">$ journalctl --unit elasticsearch
-- Logs begin at Mon 2019-02-25 00:58:47 UTC, end at Mon 2019-02-25 01:32:29 UTC. --
Feb 25 01:26:55 ip-172-31-88-104.ec2.internal systemd[1]: Started Elasticsearch.
Feb 25 01:26:55 ip-172-31-88-104.ec2.internal systemd[1]: Starting Elasticsearch...
Feb 25 01:28:13 ip-172-31-88-104.ec2.internal systemd[1]: Stopping Elasticsearch...
Feb 25 01:28:14 ip-172-31-88-104.ec2.internal systemd[1]: Stopped Elasticsearch.
Feb 25 01:29:38 ip-172-31-88-104.ec2.internal systemd[1]: Started Elasticsearch.
Feb 25 01:29:38 ip-172-31-88-104.ec2.internal systemd[1]: Starting Elasticsearch...
Feb 25 01:31:01 ip-172-31-88-104.ec2.internal systemd[1]: Stopping Elasticsearch...
Feb 25 01:31:01 ip-172-31-88-104.ec2.internal systemd[1]: Stopped Elasticsearch.
Feb 25 01:32:29 ip-172-31-88-104.ec2.internal systemd[1]: Started Elasticsearch.
Feb 25 01:32:29 ip-172-31-88-104.ec2.internal systemd[1]: Starting Elasticsearch...
</pre>
<br />
At this point, you have a running ElasticSearch 6.6.1 single-node cluster.Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-91047390122064713722019-02-24T19:36:00.000-05:002019-02-24T19:36:12.827-05:00ECE - Small Trick: Query Zookeeper StatisticsECE stands for "Elastic Cloud Enterprise". It shares most of its codebase with Elastic Cloud. The key tenets of the architecture are:<br />
<ul>
<li>Service-oriented architecture</li>
<li>Containerization using Docker</li>
<li>Deployment state coordination using ZooKeeper</li>
<li>Easy access through the Cloud UI</li>
</ul>
<br />
Since ECE is a service-oriented architecture, scaling the platform is very easy. Different services can have different reliability and performance requirements, as each service can be scaled separately. However, this also hides some of the technical details from you, especially from administrators who are curious about how it works inside. Sometimes it can be really hard to get information out of a vendor container.<br />
<br />
In our case, we have both ECE 1.x and 2.x installed in our environment, and one of the challenges we had was how to monitor the Zookeeper status. The Zookeeper status from the admin console doesn't count :). Fortunately, the ECE Zookeeper container exposes its port on "0.0.0.0" (i.e., 0.0.0.0:2192-&gt;2192/tcp), which means you can query some of the whitelisted information through the host IP and exposed port.<br />
<br />
For example, say one of your Zookeeper roles exposes port "2192", and you would like to output a list of variables that can be used for monitoring the health of the cluster. You could query the underlying host that runs the Zookeeper role like the following:<br />
<pre class="brush:bash">$ echo mntr | nc ecedc1h1.lixu.ca 2192
zk_version 3.4.0
zk_avg_latency 0
zk_max_latency 0
zk_min_latency 0
zk_packets_received 70
zk_packets_sent 69
zk_outstanding_requests 0
zk_server_state leader
zk_znode_count 4
zk_watch_count 0
zk_ephemerals_count 0
zk_approximate_data_size 27
zk_followers 4 - only exposed by the Leader
zk_synced_followers 4 - only exposed by the Leader
zk_pending_syncs 0 - only exposed by the Leader
zk_open_file_descriptor_count 23 - only available on Unix platforms
zk_max_file_descriptor_count 1024 - only available on Unix platforms
</pre>
<br />
Then, if you really want to get fancy, you could periodically send the output to a monitoring and alerting system like "Datadog" (I might make another post for this). But for ad hoc information queries, this is a nice and easy way.<br />
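As a sketch of that periodic export idea: the helper below converts "mntr" output into statsd-style gauge lines that a local Datadog agent (dogstatsd listens on UDP 8125 by default) could accept. The host, port, and metric prefix are assumptions, and `zk_mntr_to_statsd` is a hypothetical name, not an ECE or Datadog tool:

```shell
# Convert "mntr" output (one "key value" pair per line) into
# statsd gauge lines; non-numeric values such as zk_server_state
# are skipped.
zk_mntr_to_statsd() {
  awk '$2 ~ /^[0-9]+$/ { printf "zookeeper.%s:%s|g\n", $1, $2 }'
}

# From cron, e.g. every minute (bash's /dev/udp ships the lines to dogstatsd):
# echo mntr | nc ecedc1h1.lixu.ca 2192 | zk_mntr_to_statsd > /dev/udp/localhost/8125
```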
<br />
<h4>
Our Zookeeper version is v3.5.3. A few useful commands:</h4>
<br />
<ul>
<li>conf: Print details about serving configuration (not in the whitelist).</li>
<li>cons: List full connection/session details for all clients connected to this server. Includes information on numbers of packets received/sent, session id, operation latencies, last operation performed, etc... (not in the whitelist).</li>
<li>dump: Lists the outstanding sessions and ephemeral nodes. This only works on the leader (not in the whitelist).</li>
<li>envi: Print details about serving environment (not in the whitelist).</li>
<li>ruok: Tests if the server is running in a non-error state. The server will respond with imok if it is running; otherwise it will not respond at all. A response of "imok" does not necessarily indicate that the server has joined the quorum, just that the server process is active and bound to the specified client port. Use "stat" for details on state with regard to quorum and client connection information.</li>
<li>srvr: Lists full details for the server.</li>
<li>stat: Lists brief details for the server and connected clients.</li>
<li>wchs: Lists brief information on watches for the server (not in the whitelist).</li>
<li>wchc: Lists detailed information on watches for the server, by session. This outputs a list of sessions (connections) with associated watches (paths). Note: depending on the number of watches, this operation may be expensive (i.e., impact server performance), so use it carefully (not in the whitelist).</li>
<li>wchp: Lists detailed information on watches for the server, by path. This outputs a list of paths (znodes) with associated sessions. Note: depending on the number of watches, this operation may be expensive (i.e., impact server performance), so use it carefully.</li>
<li>mntr: Outputs a list of variables that could be used for monitoring the health of the cluster.</li>
</ul>
Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-71715792404626717702019-02-10T17:00:00.001-05:002019-02-10T17:02:04.642-05:00Jenkins - How To Configure Behind Nginx HTTP/HTTPS ProxyThis blog shows you how to configure Jenkins behind an Nginx proxy server. One typical use case is adding an SSL certificate to secure your Jenkins traffic.<br />
<br />
<h4>
Environment:</h4>
<b>Jenkins:</b> v2.155<br />
<b>Nginx:</b> v1.14.1<br />
<br />
I assume you already have Nginx and Jenkins installed and running; if you don't, I will put up other posts on how to install them.<br />
<br />
By default, your Jenkins process runs on port 8080, and you probably access it via a DNS name, such as:<br />
http://jenkins.lixu.ca:8080<br />
or<br />
https://jenkins.lixu.ca:8080<br />
<br />
This blog will help you configure Jenkins behind both HTTP and HTTPS. At the end, you should be able to access your Jenkins server via:<br />
http://jenkins.lixu.ca<br />
and<br />
https://jenkins.lixu.ca<br />
<br />
<h4>
Prepare SSL Certificates for HTTPS</h4>
If you are doing HTTPS as well, I assume you already have the certificate (it could be a self-signed cert). If you don't have it yet, you can check out my other blog <a href="https://tonylixu.blogspot.com/2017/12/nginx-docker-client-side-certificate.html">Creating and Signing Your Certs</a>.<br />
<br />
<b>Define upstream for both HTTP and HTTPS proxy_pass</b><br />
Define an "upstream". An "upstream" is one or more servers that can be referenced by "proxy_pass" later; this saves you from defining the servers separately for HTTP and HTTPS.<br />
<pre class="brush:bash">upstream jenkins {
server 127.0.0.1:8080 fail_timeout=0;
}
</pre>
<br />
Note: You need to define this upstream inside the "http {}" block of your "nginx.conf".<br />
<br />
<b>Update Your Nginx HTTP Config:</b><br />
You need to update the "server_name" and "location" definitions. For example:
<br />
<pre class="brush:bash">server {
listen 80;
server_name jenkins.lixu.ca;
location / {
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # Fix the "It appears that your reverse proxy set up is broken" error.
proxy_pass http://jenkins;
proxy_read_timeout 90;
# Required for new HTTP-based CLI
proxy_http_version 1.1;
proxy_request_buffering off;
# workaround for https://issues.jenkins-ci.org/browse/JENKINS-45651
add_header 'X-SSH-Endpoint' 'jenkins.lixu.ca:50022' always;
}
}
</pre>
Restart your Nginx process; once it comes back up, you should be able to access your Jenkins UI through: http://jenkins.lixu.ca<br />
<br />
<b>Update Your Nginx HTTPS Config</b><br />
Similar to HTTP, update the "server_name" and "location" definitions:<br />
<pre class="brush:bash">server {
listen 443 ssl;
server_name jenkins.lixu.ca;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
location / {
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect http:// https://;
proxy_pass http://jenkins;
# Required for new HTTP-based CLI
proxy_http_version 1.1;
proxy_request_buffering off;
proxy_buffering off; # Required for HTTP-based CLI to work over SSL
# workaround for https://issues.jenkins-ci.org/browse/JENKINS-45651
add_header 'X-SSH-Endpoint' 'jenkins.lixu.ca:50022' always;
}
}
</pre>
Restart your Nginx process; once it comes back up, you should be able to access your Jenkins UI through: https://jenkins.lixu.ca
Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0New York, NY, USA40.7127753 -74.005972839.9423093 -75.296866299999991 41.483241299999996 -72.7150793tag:blogger.com,1999:blog-23076680.post-72223514868912050042019-01-21T09:00:00.000-05:002019-01-21T09:52:11.223-05:00SLO - Service Level ObjectivesAn SLO specifies a target level of reliability for your service. SLOs are key to making data-driven decisions about reliability, and they are at the core of SRE practice. SLOs are a tool to help determine what engineering work to prioritize. For example, consider the engineering tradeoff between two tasks: upgrading a very critical scheduler to the next version, or patching the current one. By calculating each task's estimated impact on the error budget, we can determine which task is more beneficial.<br />
<br />
<h4>
Why SREs Need SLOs</h4>
In any organization, engineers are valuable assets and scarce resources. Quality engineering time should be invested in the most important characteristics of the most important services. Investing in new features or functionality wins new customers (or retains the existing ones), and investing in reliability and scalability keeps existing customers happy. SLOs help you strike the right balance, because they are key to making data-informed decisions about the opportunity cost of reliability work, and to determining how to appropriately prioritize that work.<br />
<br />
In fact, SRE's core responsibilities aren't merely to automate "all the things" and hold the pager; their day-to-day tasks and projects are driven by SLOs:<br />
<br />
<ul>
<li>Ensure SLOs are defended in the short term and can be maintained in the medium to long term.</li>
</ul>
<br />
<br />
<h4>
How To Get Started</h4>
Assuming your current service(s) fall into one of the following three categories:<br />
<br />
<ul>
<li>A greenfield development, with nothing currently deployed</li>
<li>A system in production with some monitoring to notify you when things go awry, but no formal objectives, no concept of an error budget, and an unspoken goal of 100% uptime</li>
<li>A running deployment with an SLO below 100%, but without a common understanding about its importance or how to leverage it to make continuous improvement choices - An SLO without teeth</li>
</ul>
<br />
<br />
In order to adopt an error budget-based approach to SRE, you need to reach a state in which the following hold true:<br />
<br />
<ul>
<li>There are SLOs that all stakeholders in the organization have approved as being fit for the product</li>
<li>The people responsible for ensuring that the service meets its SLO have agreed that it is possible to meet this SLO under normal circumstances</li>
<li>The organization has committed to using the error budget for decision making and prioritizing. This commitment is formalized in an error budget policy.</li>
<li>There is a process in place for refining the SLO.</li>
</ul>
<br />
<br />
<h4>
What an SLO Should Be</h4>
An SLO sets a target level of reliability for the service's customers. The graph below shows the relationship between the SLO threshold and your customers:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQdLaym4Hs2afcKuDgkSDDuod37kqFC5AtIQWOTdQz-5IPCujJUiBlxhQ6cd6ySNX91kkWOFt6EIceDt-c7IBT8JBr0aGMeoN8hP7zKWE9tutk88HMTn_GB-KUwtaNqppumUDe4w/s1600/Screen+Shot+2019-01-20+at+4.39.10+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="550" data-original-width="1222" height="288" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQdLaym4Hs2afcKuDgkSDDuod37kqFC5AtIQWOTdQz-5IPCujJUiBlxhQ6cd6ySNX91kkWOFt6EIceDt-c7IBT8JBr0aGMeoN8hP7zKWE9tutk88HMTn_GB-KUwtaNqppumUDe4w/s640/Screen+Shot+2019-01-20+at+4.39.10+PM.png" width="640" /></a></div>
<br />
Ultimately, user happiness is what matters.<br />
<br />
<h4>
SLO of 100% Is The Wrong Target:</h4>
<br />
<ul>
<li>100% reliability is not a reasonable goal. Even with the most expensive and redundant hardware, automated health checks, and the fastest failover, there is a nonzero probability that one or more components will fail simultaneously, resulting in less than 100% reliability.</li>
<li>Customers won't experience 100% reliability anyway: the path between you and your customers is long and complex, and any of its components can fail.</li>
<li>Even if you do manage to create a 100% reliable experience, you can never update or improve your service, because any change comes with risk; eventually your service will stagnate and users will go elsewhere.</li>
<li>An SLO of 100% means you only have time to be reactive, not proactive.</li>
</ul>
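The arithmetic behind picking a sub-100% target is worth seeing once: the error budget is simply (1 - SLO) of the window. A quick sketch (the 30-day window is an assumption; use whatever window your SLO actually defines):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime in minutes for a given SLO over a rolling window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)

# For example, a 99.9% SLO over 30 days leaves roughly 43 minutes of budget.
for slo in (0.99, 0.999, 0.9999):
    print(f"{slo:.2%} SLO -> {error_budget_minutes(slo):.1f} minutes/month")
```

Comparing a proposed change's expected downtime against the remaining budget is what turns the SLO into a prioritization tool.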
Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0New York, NY, USA40.7127753 -74.005972839.9423093 -75.296866299999991 41.483241299999996 -72.7150793tag:blogger.com,1999:blog-23076680.post-54284959858124270802019-01-20T11:51:00.004-05:002019-01-20T11:51:37.674-05:00DevOps vs SRE (Site Reliability Engineer)<h2>
<span style="font-size: x-large;">Background</span></h2>
<h3>
DevOps:</h3>
DevOps is a loose set of practices, guidelines, and culture designed to break down silos in IT development, operations, networking, and security. The key points of DevOps can be described by the useful acronym CALMS (Culture, Automation, Lean, Measurement, and Sharing).<br />
<br />
<h3>
SRE:</h3>
SRE is more of a job role: a set of practices found to work, and some beliefs that animate those practices. SRE is an implementation of some of the DevOps philosophy; in a way, class SRE implements interface DevOps.<br />
<br />
<h2>
<span style="font-size: x-large;">Key Ideas</span></h2>
<h4>
DevOps</h4>
<h3>
No More Silos:</h3>
<br />
<ul>
<li>No longer separate operations and development teams</li>
<li>Extreme siloization of knowledge, misaligned incentives, and a lack of collaboration are bad for business</li>
</ul>
<br />
<h3>
Accidents Are Normal:</h3>
<br />
<ul>
<li>Accidents are not just the result of the isolated actions of an individual; they result from missing safeguards for when things inevitably go wrong. It is more profitable to focus on speeding recovery than on preventing accidents, and on building safeguard/guardrail systems to minimize the impact of individual incidents.</li>
</ul>
<br />
<h3>
Change Should Be Gradual: </h3>
<br />
<ul>
<li>Change is best when it is small and frequent. Change is risky, and the correct response is to split your changes into smaller subcomponents where possible, then build a steady pipeline of low-risk change out of the regular output of your system. This strategy often leads to CI/CD.</li>
</ul>
<br />
<h3>
Tooling and Culture Are Interrelated: </h3>
<br />
<ul>
<li>Tooling is an important component, but organizational culture is the key to success. A good culture can work around broken tooling, but the opposite rarely holds.</li>
</ul>
<br />
<h3>
Measurement Is Crucial:</h3>
<br />
<ul>
<li>Establish the reality of what's happening by means of objective measurement, verify that you're changing the situation as you expect, and create an objective foundation for conversations that different functions can agree upon.</li>
</ul>
<br />
<br />
<h4>
SRE</h4>
<h3>
Operation Is a Software Problem: </h3>
<br />
<ul>
<li>SREs should use software engineering approaches to solve problems.</li>
</ul>
<br />
<h3>
Manage by Service Level Objectives (SLOs): </h3>
<br />
<ul>
<li>Instead of promising 100% availability for everything, an SRE team should set an appropriate availability target for each service and its user base, and the service is then managed to that SLO.</li>
</ul>
<br />
<h3>
Work to Minimize Toil: </h3>
<br />
<ul>
<li>Any manual, structurally mandated operational task is abhorrent. If automation can perform a desired operation, it often should. Any time spent on operational tasks is time not spent on project work, and project work is how we make services more reliable and scalable.</li>
</ul>
<br />
<h3>
Move Fast by Reducing the Cost of Failure: </h3>
<br />
<ul>
<li>One of the main benefits of SRE engagement is not necessarily increased reliability; it is actually improved product development output. A reduced MTTR (Mean Time To Repair) for faults results in increased product developer velocity, as engineers don't have to waste time fixing issues.</li>
</ul>
<br />
<h3>
Share Ownership with Developers: </h3>
<br />
<ul>
<li>There should be no rigid boundaries between "dev" and "ops". The SRE team shares a skill set with the development teams, and SREs have particular expertise in availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning. Both "dev" and "ops" should have a holistic view of the stack.</li>
</ul>
<br />
<h3>
Use the Same Tooling: </h3>
<br />
<ul>
<li>Teams minding a service should use the same tools, regardless of function or job title. There is no good way to manage a service that has one tool for "dev" and another for "ops". The more divergence you have, the less your organization benefits from each tooling improvement.</li>
</ul>
<br />
<br />
<h2>
<span style="font-size: x-large;">Conclusion:</span></h2>
<br />
<ul>
<li>Change is necessary in order to improve.</li>
<li>Collaboration is front and center for the work.</li>
<li>Change management is best pursued as small, continual actions, the majority of which are ideally both automatically tested and automatically applied.</li>
<li>The right tooling is critically important, and an API orientation for system management is an even more important philosophy.</li>
<li>Measurement is absolutely key.</li>
<li>Bad things happen occasionally; accept that reality and build blameless postmortems in order to offset unhelpful, adrenaline-laden reactions.</li>
</ul>
<br />
<br />
In general, DevOps is more of a philosophy and culture, and it is relatively silent on how to run operations at a detailed level; it is not prescriptive about the precise management of services, choosing instead to concentrate on breaking down barriers in the wider organization, and this has much value. SRE has relatively narrowly defined responsibilities, and its remit is generally service-oriented and end-user-oriented rather than business-oriented. It brings an opinionated intellectual framework to the problem of how to run systems effectively. SRE believes in the same things as DevOps, but for slightly different reasons.<br />
<div>
<br /></div>
Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-69702499380065987192019-01-06T19:57:00.000-05:002019-01-06T19:57:07.758-05:00AWS - Configure Rabbitmq Cluster With Auto-scaling Discovery PluginRabbitMQ is an open source message broker, or in simple terms, a queue manager. Applications or services connect to RabbitMQ; they can then either add messages/tasks to a queue (publisher) or fetch them from a queue (subscriber). A communication platform for microservices, you could say.<br />
<br />
Each server running an instance of the RabbitMQ application is a node. A RabbitMQ cluster (or broker) can consist of multiple nodes. All of the nodes share configuration information, such as users, exchanges, and queues.<br />
<br />
In this blog, I will show you how to configure RabbitMQ clustering in AWS EC2 with Auto-scaling discovery by using "rabbitmq_peer_discovery_aws" plugin (<a href="https://github.com/rabbitmq/rabbitmq-peer-discovery-aws">https://github.com/rabbitmq/rabbitmq-peer-discovery-aws</a>). There are not many good instructions out there for how to configure auto-cluster discovery, so I decided to make one.<br />
<br />
We will be using Ansible for Rabbitmq server installation, configuration and cluster discovery, so you will need to be familiar with Ansible (<a href="https://www.ansible.com/">https://www.ansible.com/</a>).<br />
<br />
I've created a GitHub repo called ansible-aws (<a href="https://github.com/tonylixu/ansible-aws">https://github.com/tonylixu/ansible-aws</a>); if you are familiar with Ansible already, you can go ahead and check out how I configure RabbitMQ auto-clustering with Ansible. The code itself is pretty self-explanatory, so I will not describe the details; go ahead and read the source code.<br />
<br />
<h3>
<b>Important things:</b></h3>
<br />
<ul>
<li>Plugins: You will need the rabbitmq_aws, rabbitmq_peer_discovery_aws, and rabbitmq_autocluster plugins.</li>
<li>Erlang cookie: You need to use the same Erlang cookie on all of your RabbitMQ nodes.</li>
<li>Queue mirroring: By default, the incoming, ondemand, index, and parser queues are mirrored. It is easy to configure mirroring for other queues; check out the "create mirror policy" task.</li>
</ul>
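For reference, with the "rabbitmq_peer_discovery_aws" plugin the discovery settings typically live in "rabbitmq.conf". The sketch below is a hypothetical fragment (the region and backend choice are assumptions about your deployment; see the plugin's README for the authoritative option list):

```ini
# Hypothetical rabbitmq.conf fragment (adjust to your deployment)
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_aws
cluster_formation.aws.region = us-east-1
# Discover peers through the EC2 Auto Scaling group the node belongs to
cluster_formation.aws.use_autoscaling_group = true
```

The Ansible role in the repo templates out an equivalent config for each node.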
<br />
<h3>
<b>To run:</b></h3>
Simply pre-deploy the rabbit role to your EC2 instance, then run "ansible-playbook main.yml". When a new RabbitMQ node is initialized, it will automatically join the cluster.<br />
<br />
<h3>
Example:</h3>
<b>Clustering:</b><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipOQZjyRcqUCR3vHxQnQgvzsyH_rguwD1ETXaX0F73V722i6fdJDNGAjOJ6l5fHn1wU6tjSez9g3Psz_FoTXJcG5mS1j1Q5TpyAhcDVJEfEa-YkXkqVn1nvpkaPEZQ8KpQI9TKHw/s1600/Screen+Shot+2019-01-06+at+7.48.39+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="529" data-original-width="1313" height="256" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipOQZjyRcqUCR3vHxQnQgvzsyH_rguwD1ETXaX0F73V722i6fdJDNGAjOJ6l5fHn1wU6tjSez9g3Psz_FoTXJcG5mS1j1Q5TpyAhcDVJEfEa-YkXkqVn1nvpkaPEZQ8KpQI9TKHw/s640/Screen+Shot+2019-01-06+at+7.48.39+PM.png" width="640" /></a></div>
<b><br /></b>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: left;">
<b>Mirroring:</b></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgaCc1VHgDrw2__iEj3MqWwN0u74gam_QXvxGEWa9v2YwpGfAItwuRMgYvtvjd6f9q4jTqB9a-HIYtKxv4fWgJOMl1LHazlfoo8-jBNNmmIWl-O_lJQSLd5qSCKFak-_7IJ120w1w/s1600/Screen+Shot+2019-01-06+at+7.48.55+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="406" data-original-width="816" height="318" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgaCc1VHgDrw2__iEj3MqWwN0u74gam_QXvxGEWa9v2YwpGfAItwuRMgYvtvjd6f9q4jTqB9a-HIYtKxv4fWgJOMl1LHazlfoo8-jBNNmmIWl-O_lJQSLd5qSCKFak-_7IJ120w1w/s640/Screen+Shot+2019-01-06+at+7.48.55+PM.png" width="640" /></a></div>
<br />Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-91167842956759439102019-01-01T18:58:00.002-05:002019-01-01T19:02:01.082-05:00Python - Validator Action for argparse ModulePython's argparse module is quite featureful, readble, well-documented and easy to use. Below is an simple example:<br />
<br />
<pre class="brush:bash">import argparse
parser = argparse.ArgumentParser()
parser.add_argument('somearg', help='Some argument')
args = parser.parse_args()
parameter = args.somearg
print("You provided parameter {}!".format(parameter))
</pre>
<br />
and run it.<br />
<pre class="brush:bash">$ python3 test.py
usage: test.py [-h] somearg
test.py: error: the following arguments are required: somearg
$ python3 test.py cat
You provided parameter cat!
</pre>
<br />
The argparse module also generates help and usage messages automatically, which is pretty handy.<br />
<br />
<pre class="brush:bash">$ python3 test.py -h
usage: test.py [-h] somearg
positional arguments:
somearg Some argument
optional arguments:
-h, --help show this help message and exit
</pre>
<br />
<h3>
Actions:</h3>
You can create a custom action if you want to validate the parameter on the fly! ArgumentParser objects associate command-line arguments with actions, and these actions can do just about anything with the command-line arguments associated with them. The action keyword argument specifies how the command-line arguments should be handled. Besides custom validators, here is the list of supplied actions:<br />
<br />
<ul>
<li>store - This just stores the argument’s value</li>
<li>store_const - This stores the value specified by the const keyword argument</li>
<li>store_true and store_false - These are special cases of 'store_const' used for storing the values True and False respectively</li>
<li>append - This stores a list, and appends each argument value to the list</li>
<li>append_const - This stores a list, and appends the value specified by the const keyword argument to the list</li>
<li>count - This counts the number of times a keyword argument occurs</li>
<li>help - This prints a complete help message for all the options in the current parser and then exits</li>
<li>version - This expects a version= keyword argument in the add_argument() call, and prints version information</li>
</ul>
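For instance, a few of these built-in actions can be exercised by handing parse_args() an explicit argument list instead of reading the real command line:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--verbose', action='store_true')  # flag: True if present
parser.add_argument('--tag', action='append')          # collect repeated values
parser.add_argument('-v', action='count', default=0)   # count occurrences

args = parser.parse_args(['--verbose', '--tag', 'a', '--tag', 'b', '-vv'])
print(args.verbose)  # True
print(args.tag)      # ['a', 'b']
print(args.v)        # 2
```

Passing a list like this is also a convenient way to unit-test your parsers.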
<br />
<br />
This blog only covers customized validators.<br />
<br />
<h3>
Define a validator class:</h3>
<pre class="brush:bash">class ValidateParameter(argparse.Action):
"""Validate parameter"""
def __call__(self, parser, namespace, values, option_string=None):
if not values.isupper():
            sys.exit("This parameter must be uppercase")
else:
setattr(namespace, self.dest, values)
</pre>
<br />
Then, when you add the argument, pass the action, which invokes the "ValidateParameter" class we just defined:<br />
parser.add_argument('somearg', action=ValidateParameter, help='Some argument')<br />
<br />
<h3>
Run it again:</h3>
<pre class="brush:bash">$ python3 test.py lowercase
This parameter must be uppercase
$ python3 test.py LOWERCASE
You provided parameter LOWERCASE!
</pre>
<br />
The source code:<br />
<pre class="brush:bash">import argparse
import sys
class ValidateParameter(argparse.Action):
"""Validate parameter"""
def __call__(self, parser, namespace, values, option_string=None):
if not values.isupper():
sys.exit("This parameter must be uppercase")
else:
setattr(namespace, self.dest, values)
parser = argparse.ArgumentParser()
parser.add_argument('somearg', action=ValidateParameter, help='Some argument')
args = parser.parse_args()
parameter = args.somearg
print("You provided parameter {}!".format(parameter))
</pre>
<div>
<br /></div>
Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-13951242815200424312018-12-15T12:34:00.000-05:002018-12-15T12:46:20.928-05:00AWS EC2 - How To Install Logstash and Output Logs To AWS S3This blog shows you how to install Logstash on an AWS EC2 instance and configure it to send sample logs to S3.<br />
<h3>
<b>Environments:</b></h3>
<ul>
<li><b>logstash</b>: 5.6.2</li>
<li><b>AMI: </b>ami-009d6802948d06e52</li>
</ul>
<h3>
<b>Spin Up EC2 Instance:</b></h3>
1. Spin up an EC2 instance from your AWS console; I use a t2.large as an example. The default disk size and security group settings should be fine. I only have SSH open for ingress rules; egress has no restrictions.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiV607BrIcWoEcPqXNkm8nLjuvVpChwGglrr156EuaaYuUxwyc2_IreA-cabnVJ8Zx2TbD_twgL3wF3uP78UG1EtO2atss40F8CDvrA9qD1zGS34Y4U0-yrBNqwJ10xM_a3sJNb1Q/s1600/logstash-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="178" data-original-width="923" height="122" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiV607BrIcWoEcPqXNkm8nLjuvVpChwGglrr156EuaaYuUxwyc2_IreA-cabnVJ8Zx2TbD_twgL3wF3uP78UG1EtO2atss40F8CDvrA9qD1zGS34Y4U0-yrBNqwJ10xM_a3sJNb1Q/s640/logstash-3.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMthzYuCYrJByClKcmP9OdEVgFtgLGCmc1UsmbrhYfUZE5ps3S075SnADBOfwzv0a053KFpCso29EOYUWdsgszBAEjQyIJxh3GuingD45BaHxXmAnAiBv1PWtY6h1NBpqUm2Q85Q/s1600/logstash-4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="183" data-original-width="967" height="120" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMthzYuCYrJByClKcmP9OdEVgFtgLGCmc1UsmbrhYfUZE5ps3S075SnADBOfwzv0a053KFpCso29EOYUWdsgszBAEjQyIJxh3GuingD45BaHxXmAnAiBv1PWtY6h1NBpqUm2Q85Q/s640/logstash-4.png" width="640" /></a></div>
<br />
2. After the instance is up, ssh into the instance.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcfUIlz1-UjXiO0ItTnBPluJCaq7USOyShVxdxBRs74n-V8fVd72WiBl6WCVdEksWJYqMzCeSzmVI48TOHv5XJ07GzzdqeJWR8Y-MbO2P5xftcKp9E_Tu1nM7zfVHgyLIQnPLFFQ/s1600/logstash-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="153" data-original-width="639" height="152" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcfUIlz1-UjXiO0ItTnBPluJCaq7USOyShVxdxBRs74n-V8fVd72WiBl6WCVdEksWJYqMzCeSzmVI48TOHv5XJ07GzzdqeJWR8Y-MbO2P5xftcKp9E_Tu1nM7zfVHgyLIQnPLFFQ/s640/logstash-1.png" width="640" /></a></div>
<br />
<h3>
<b>Install Logstash 5.6.2:</b></h3>
1. Download the Logstash 5.6.2<br />
<pre class="brush:bash">$ cd /root
$ wget https://artifacts.elastic.co/downloads/logstash/logstash-5.6.2.rpm
</pre>
2. Install Java-1.8.0<br />
<pre class="brush:bash">$ yum install java-1.8.0
</pre>
3. Install Logstash<br />
<pre class="brush:bash">$ rpm -iUh logstash-5.6.2.rpm
warning: logstash-5.6.2.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
################################# [100%]
Updating / installing...
################################# [100%]
Using provided startup.options file: /etc/logstash/startup.options
Successfully created system startup script for Logstash
</pre>
<br />
<h3>
<b>Configure Logstash:</b></h3>
1. I've created a working sample "test.conf" below; you will need to create the dummy "/tmp/yum.log" file.<br />
<pre class="brush:bash">$ cat test.conf
input {
file {
path => "/tmp/yum.log"
}
}
output {
s3 {
access_key_id => "money_key"
secret_access_key => "monkey_secret"
region => "us-east-1"
bucket => "tony-logstash-test"
prefix => "test/"
}
}
</pre>
<br />
Note: It is okay to have "/" in your secret access key.<br />
<br />
2. Make sure there is no existing data:<br />
<pre class="brush:bash">$ aws s3 ls s3://tony-logstash-test
</pre>
3. Output the logfile to the S3 bucket:<br />
<pre class="brush:bash">$ sudo /usr/share/logstash/bin/logstash -f test.conf --path.settings=/etc/logstash
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
</pre>
Note: It is important to use sudo here; otherwise you might run into permission errors:
<br />
<pre class="brush:bash">main ERROR FileManager (/var/log/logstash/logstash-plain.log) java.io.FileNotFoundException: /var/log/logstash/logstash-plain.log (Permission denied) java.io.FileNotFoundException:
/var/log/logstash/logstash-plain.log (Permission denied)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
</pre>
4. Depending on the size of the file, it might take some time. You can also check the Logstash log file for progress:
<br />
<pre class="brush:bash">$ less /var/log/logstash/logstash-plain.log
[2018-12-15T17:27:06,468][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
<span style="white-space: pre;"> </span>...
[2018-12-15T17:27:06,631][INFO ][logstash.pipeline ] Pipeline main started
[2018-12-15T17:27:06,675][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
</pre>
<br />
You should be able to see the output data in your S3 bucket.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhff-9UTTIYf1Ov2ImVGmQMBsa-Dy_fDX6rw7QOdWh6S4Yt0vuKyA06hpei1yyjvhyWWN5ewSs8P_cDgHAuVzbDZBQVu6Ix_1-i14lcusIjIrDYG0paAvjBresdq0fjnY_htluwMQ/s1600/logstash-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="332" data-original-width="631" height="336" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhff-9UTTIYf1Ov2ImVGmQMBsa-Dy_fDX6rw7QOdWh6S4Yt0vuKyA06hpei1yyjvhyWWN5ewSs8P_cDgHAuVzbDZBQVu6Ix_1-i14lcusIjIrDYG0paAvjBresdq0fjnY_htluwMQ/s640/logstash-2.png" width="640" /></a></div>
<br />
Once you are fine with the test.conf, you can move it to "/etc/logstash/conf.d" and have Logstash running as a background process permanently.Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-82534038517043391122018-09-16T20:25:00.001-04:002018-09-16T20:28:43.543-04:00Jenkins - How To Configure Role-Based Security Model<b>Jenkins Configuration and OS:</b><br />
OS: CentOS 7.2<br />
Version: v2.121.3<br />
<br />
I was trying to follow the Jenkins "Role Strategy Plugin" documentation (<a href="https://wiki.jenkins.io/display/JENKINS/Role+Strategy+Plugin">https://wiki.jenkins.io/display/JENKINS/Role+Strategy+Plugin</a>) to set up folder-level security access control for our Jenkins instance, but the instructions weren't very clear and it took me a while to figure them out. So I've decided to make a better one with up-to-date screenshots.<br />
<br />
<b>Purpose</b><br />
Isolate folders from certain groups of users and only allow users to access folders that they own in Jenkins.<br />
<br />
<b>Environment</b><br />
•<span style="white-space: pre;"> </span>Role-Based Strategy Plugin (<a href="https://wiki.jenkins.io/display/JENKINS/Role+Strategy+Plugin">https://wiki.jenkins.io/display/JENKINS/Role+Strategy+Plugin</a>)<br />
•<span style="white-space: pre;"> </span>Folder plugin or Folder Plus Plugin<br />
<br />
The "Role-Based Strategy Plugin" combines with the "Folder Plugin" to give a powerful solution for managing a Jenkins instance shared by multiple teams of users. The Jenkins administrator can create folders for each of the teams and then create groups in those folders for each of the roles that team members can have. By delegating the management of group membership (but not the management of the roles assigned to groups) to the team leaders, the Jenkins administrator can empower the team leads to manage the permissions of their team while reducing their own administrative overhead.<br />
<br />
<b>Scenario</b><br />
Let's say we have three different user groups in Jenkins:<br />
• admin (group) → lxu1 (user)<br />
• team-a (group) → usera (user)<br />
• team-b (group) → userb (user)<br />
"admin" group users are allowed to access everywhere. Users belonging to the "team-a" group will only have access to folder "team-a", and users belonging to "team-b" will only have access to folder "team-b".<br />
<br />
<b>Configuration Steps:</b><br />
1.<span style="white-space: pre;"> </span>Configure "Active Directory" in "Manage Jenkins" → "Configure Global Security"<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_wTOwmo1g-seVHHmTLj1XK17TFfNIZrn2AIrclsEXPvChL5P4gAz7HvO_vFYCC3FI1mNZHk6HaObHOGZn3ZJ6k8rvtDyK0tpahGdEUhpAvdtosjPDlHaKrVU9q7j4M0W8QSMIEg/s1600/jenkins-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="541" data-original-width="1308" height="264" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_wTOwmo1g-seVHHmTLj1XK17TFfNIZrn2AIrclsEXPvChL5P4gAz7HvO_vFYCC3FI1mNZHk6HaObHOGZn3ZJ6k8rvtDyK0tpahGdEUhpAvdtosjPDlHaKrVU9q7j4M0W8QSMIEg/s640/jenkins-1.png" width="640" /></a></div>
<br />
2.<span style="white-space: pre;"> </span>Enable "Role-Based Strategy"<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_8hFo0A1qbouNxGDiirdi3gd8mI6urtlpjF-EMZSknFFJ9SJByxR69ncc3NQiCETM0eaIV4VsXcCQ5nw3aoXEEZqvTRrcbjDtTYs00Pj72O4PmPgx_Ipy-c_B6uqcEgEEg2aAnQ/s1600/jenkins-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="425" data-original-width="1178" height="230" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_8hFo0A1qbouNxGDiirdi3gd8mI6urtlpjF-EMZSknFFJ9SJByxR69ncc3NQiCETM0eaIV4VsXcCQ5nw3aoXEEZqvTRrcbjDtTYs00Pj72O4PmPgx_Ipy-c_B6uqcEgEEg2aAnQ/s640/jenkins-2.png" width="640" /></a></div>
<br />
3. Define and assign roles by using the "Manage and Assign Roles" item, which appears in the "Manage Jenkins" screen:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCOLx_BGus3gqhm35Oph5nGnKKSqX8dXsdBnDxwm4Wsi2XinhLveSL4IQMCIf3f2vCSNRzHPEuVWRgQAZ_MXMHSbwgaFb0VMWnsejJ43nHL15e6wKJWmZNeaPpBOwg95IgePhU_w/s1600/jenkins-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="134" data-original-width="1147" height="73" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCOLx_BGus3gqhm35Oph5nGnKKSqX8dXsdBnDxwm4Wsi2XinhLveSL4IQMCIf3f2vCSNRzHPEuVWRgQAZ_MXMHSbwgaFb0VMWnsejJ43nHL15e6wKJWmZNeaPpBOwg95IgePhU_w/s640/jenkins-3.png" width="640" /></a></div>
<br />
4.<span style="white-space: pre;"> </span>Define Global Roles:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcTLona4Ne5NcYxZIsUCx2CiuZiWwj2BT4qNEQn-YS5Ng4b8Kn8MCOYEjUPTjLM1AZePH-CBkul-iiCihDir4-M1a61ygLkFGbXX-nzb7hb1A4SNxC7qv3STIy6B9QFaXLEqLFGQ/s1600/jenkins-4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="148" data-original-width="1309" height="72" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcTLona4Ne5NcYxZIsUCx2CiuZiWwj2BT4qNEQn-YS5Ng4b8Kn8MCOYEjUPTjLM1AZePH-CBkul-iiCihDir4-M1a61ygLkFGbXX-nzb7hb1A4SNxC7qv3STIy6B9QFaXLEqLFGQ/s640/jenkins-4.png" width="640" /></a></div>
<br />
5.<span style="white-space: pre;"> </span>Define Project Roles:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcoxCVIY5tqnJp5e-jMrtPZW5sKxE_mTDTpwGE9PesCeEEGXB2_IbhGrCWMTi_OwYmjej70KW8gxsXmkBXV74axq9y4Ufsh34xRDWGAfzdEBHVrsidNtgVPtXqkyALZH7Uu-gCVA/s1600/jenkins-5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="350" data-original-width="1310" height="169" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcoxCVIY5tqnJp5e-jMrtPZW5sKxE_mTDTpwGE9PesCeEEGXB2_IbhGrCWMTi_OwYmjej70KW8gxsXmkBXV74axq9y4Ufsh34xRDWGAfzdEBHVrsidNtgVPtXqkyALZH7Uu-gCVA/s640/jenkins-5.png" width="640" /></a></div>
<b>Notes:</b><br />
The only tricky field is the "Pattern" one. This field is a regular expression intended to match the full name (including the folder name, if you are using the CloudBees Folders Plugin) of the jobs that the role will apply to. For example, if you set the field to "Roger-.*", the role will match all jobs whose names start with "Roger-". Note that the pattern is case-sensitive; to perform a case-insensitive match, use the (?i) notation: "Roger-.*" matches the upper case, "roger-.*" matches the lower case, and "(?i)roger-.*" matches either. (This does not work at the job level, however; for example, "ROGER-A/roger-a-job1" will not be displayed.) If you have a nested folder structure where you want to grant particular access to a second-level folder (or deeper), consider a two-level security structure as well. Say you want to give a user or group exclusive write/modify access to "foo/bar" and nothing else under "foo": first assign that user/group the read/discover permissions with the pattern "^foo.*", then assign the same user/group the more specific permissions with the pattern "^foo/bar.*", similar to what you would do in a Unix/Linux environment.<br />
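The case-sensitivity behavior described above can be sanity-checked outside Jenkins with a quick Python snippet (the job names below are hypothetical):

```python
import re

# Hypothetical job names, including one inside a folder
jobs = ["ROGER-A/roger-a-job1", "Roger-build", "roger-deploy", "other-job"]

case_sensitive = re.compile(r"Roger-.*")
case_insensitive = re.compile(r"(?i)roger-.*")

# re.match anchors at the start of the string, like the role pattern
matched_cs = [j for j in jobs if case_sensitive.match(j)]
matched_ci = [j for j in jobs if case_insensitive.match(j)]

print(matched_cs)  # ['Roger-build']
print(matched_ci)  # ['ROGER-A/roger-a-job1', 'Roger-build', 'roger-deploy']
```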
<br />
6.<span style="white-space: pre;"> </span>Go to Assign Roles:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhDoU25Mhy0ALbaWTC3tmPOeo4YuP16tUVTP7ojJbsFEQSmimPmlIQrLALNQ4ZqwWzWuXaacBNaLjYjNuWtpjLzTzWTOdO_vnrql8a7KnNPtITRHXCptg214XmjViCxbxJRZboKAw/s1600/jenkins-6.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="598" data-original-width="1123" height="212" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhDoU25Mhy0ALbaWTC3tmPOeo4YuP16tUVTP7ojJbsFEQSmimPmlIQrLALNQ4ZqwWzWuXaacBNaLjYjNuWtpjLzTzWTOdO_vnrql8a7KnNPtITRHXCptg214XmjViCxbxJRZboKAw/s400/jenkins-6.png" width="400" /></a></div>
<br />
7.<span style="white-space: pre;"> </span>Assign the following roles:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-ec5ryKDHkGTnCr3GrS2rg4wHcvEZ5RoSFhsJma6nxxggziC4rPPptkKs2n_FPEId4-6zW3aftZUyXkiEWOoSzmO_hyphenhyphend7bVkbgr1qaL2P7KqLCOQwYqXYGZAXBgnXkjWefOfRmQ/s1600/jenkins-7.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="532" data-original-width="729" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-ec5ryKDHkGTnCr3GrS2rg4wHcvEZ5RoSFhsJma6nxxggziC4rPPptkKs2n_FPEId4-6zW3aftZUyXkiEWOoSzmO_hyphenhyphend7bVkbgr1qaL2P7KqLCOQwYqXYGZAXBgnXkjWefOfRmQ/s320/jenkins-7.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZYUzW1zE2SlbyQHncVuT_544pkTyZyyzBIVioStd7R7OmRvhSAPVqKdpCgXYOh4jW_q6Ias3VpltmafHNFJU6sEkIWELN11DC2-88WZ3m154tlhDAN-7waUnSaH2ofwZ2xT1HYQ/s1600/jenkins-8.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="287" data-original-width="1179" height="154" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZYUzW1zE2SlbyQHncVuT_544pkTyZyyzBIVioStd7R7OmRvhSAPVqKdpCgXYOh4jW_q6Ias3VpltmafHNFJU6sEkIWELN11DC2-88WZ3m154tlhDAN-7waUnSaH2ofwZ2xT1HYQ/s640/jenkins-8.png" width="640" /></a></div>
Note: You should add "usera" and "userb" to the "overall-read" role and to their team's corresponding Item Roles.<br />
<br />
8. Create two folders, "teama" and "teamb", and define a couple of sample jobs inside each folder.<br />
<br />
9. Now log into the Jenkins instance as "usera" and "userb". After login, "usera" can only see folder "teama" and the jobs defined inside it, and "userb" can only see folder "teamb" and the jobs inside it.Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-47926254458627560882018-07-08T12:08:00.001-05:002018-07-08T12:11:03.804-04:00Pip Install SSL Error - CERTIFICATE_VERIFY_FAILEDIf you are doing a "pip install package" and getting the following errors:<br />
<pre class="brush:bash">Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None))
after connection broken by 'SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAIL
ED] certificate verify failed (_ssl.c:777)'),)': /simple/pyaml/
Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None))
after connection broken by 'SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAIL
ED] certificate verify failed (_ssl.c:777)'),)': /simple/pyaml/
</pre>
<br />
<h3>
<b>What happened?</b></h3>
<div>
<b><br /></b></div>
PyPI just switched something in the backend. Before the rollout of pypi.org, the only hostname necessary to interact with the index was pypi.python.org. Now you must be able to connect (over TLS) to pypi.org and files.pythonhosted.org. Files hosting was moved to its own domain during the migration.<br />
<br />
<b>
Before</b><br />
Previously, simple index calls to pypi.python.org responded with relative URLs on the existing service:<br />
<pre class="brush:bash"><a href="../../packages/bb/69/a9fb8adbbc0a7b..........
</pre>
<br />
<b>
After</b><br />
Now, simple index calls to pypi.org respond with absolute URLs to the files service:<br />
<pre class="brush:bash"><a href="https://files.pythonhosted.org/packages/bb/69/a9fb8adbbc0a7b.........</pre>
<br />
<h3>
Here are couple of ways to fix it:</h3>
<div>
<br /></div>
<b>
No Proxy Server:</b><br />
If you are not using a proxy, you can install the "certifi" package and upgrade your pip to 10.0.x as a workaround.<br />
or<br />
<pre class="brush:bash">$ pip install pyaml --trusted-host pypi.python.org --trusted-host \</pre>
<pre class="brush:bash">files.pythonhosted.org --trusted-host pypi.org
</pre>
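If you want to avoid passing --trusted-host on every invocation, the same hosts can also be listed in pip's configuration file. A sketch (the file location varies by platform and pip version, e.g. ~/.config/pip/pip.conf or ~/.pip/pip.conf on Linux; verify for your setup):

```ini
; pip configuration sketch -- path and exact syntax depend on your pip version
[global]
trusted-host = pypi.python.org
               files.pythonhosted.org
               pypi.org
```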
<br />
<b>
With Proxy Server:</b><br />
Your corporate proxy server probably doesn't allow traffic to pypi.org and/or files.pythonhosted.org. You will need to ask your network team to open traffic for<br />
<ul>
<li><b>pypi.org ("PyPI"): </b>This serves the web UI, /simple index, JSON documents, and other APIs</li>
<li><b>files.pythonhosted.org ("PyPI Files Hosting"):</b> This serves packages uploaded to PyPI</li>
<li><b>pypi.python.org ("Legacy"):</b> Effectively a massive redirect service now, redirecting requests to the appropriate new location on pypi.org or files.pythonhosted.org</li>
</ul>
Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-24133617325106480602018-07-08T11:53:00.004-04:002018-07-08T11:53:58.006-04:00Python3 - How To Add Package Path To Sys PathFirst of all, let's take a look at how Python finds its modules:<br />
<br />
Strictly speaking, a module is a single Python file, while a package is a folder containing Python files, accompanied by a (possibly empty) file named __init__.py that tells Python it is a package to import modules from. In both cases, modules need their .py extension. By default, Python looks for its modules and packages in $PYTHONPATH.<br />
<br />
To find out what your $PYTHONPATH is:<br />
<pre class="brush:bash">$ echo $PYTHONPATH
</pre>
<br />
To see the full module search path Python actually uses (which includes $PYTHONPATH plus the defaults), run the following code in Python:<br />
<pre class="brush:bash">import sys
print(sys.path)
</pre>
<br />
Now let's talk about how to add a package into your $PYTHONPATH:<br />
There are two ways of doing it: through the Python file itself, or by updating $PYTHONPATH.<br />
<br />
<b>Within a Python file:</b><br />
From within a Python file, you can add path(s) to the default search path at runtime by adding the following lines at the top of your Python application or script:<br />
<br />
<pre class="brush:bash">import sys
sys.path.insert(0, "/path/to/your/package_or_module")
# Now you can import
And I can simply import the file test.py by:
import test
</pre>
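To see the package variant in action, here is a small self-contained sketch that builds a throwaway package on disk and imports it via sys.path (the package and function names are illustrative):

```python
import os
import sys
import tempfile

# Build a throwaway package on disk: my_package/__init__.py and my_package/greet.py
base = tempfile.mkdtemp()
pkg_dir = os.path.join(base, "my_package")
os.makedirs(pkg_dir)
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "greet.py"), "w") as f:
    f.write("def hello():\n    return 'hello'\n")

# Make the parent directory searchable, then import normally
sys.path.insert(0, base)
from my_package import greet

print(greet.hello())  # hello
```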
<br />
<b>Update the PYTHONPATH:</b><br />
Suppose you have a package called my_package in /home/myname/pythonfiles, to add the /home/myname/pythonfiles path to your $PYTHONPATH, you need to:<br />
<pre class="brush:bash">export PYTHONPATH=$PYTHONPATH:/home/myname/pythonfiles
</pre>
<br />
Now you should be able to import modules from "my_package".Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-83257733541231153262018-07-07T15:12:00.002-04:002018-07-07T15:12:53.766-04:00CentOS7 - How to Install Git 1.9The latest version of Git in the default CentOS7 repository is 1.8; to upgrade your Git version to 1.9, you will need to install it from the CentOS SCLo packages.<br />
<br />
<h3>
<b>Install CentOS SCLo RH Testing repository:</b></h3>
<pre class="brush:bash">$ sudo yum install
Install git19 rpm package:
$ yum install git19
</pre>
<br />
Then, you will need to symlink the latest version:<br />
<pre class="brush:bash">$ sudo mv /usr/bin/git /usr/bin/git-old
$ sudo ln -s /opt/rh/git19/root/usr/bin/git /usr/bin/git
</pre>
<br />
Test it out:<br />
<pre class="brush:bash">$ git --version
git version 1.9.4
</pre>
<br />
<h3>
<b>Or you can install it from the source:</b></h3>
Before you begin, you'll need to install the software that git depends on. These dependencies are all available in the default CentOS repositories, along with the tools that we need to build a binary from source:<br />
<br />
<pre class="brush:bash">$ sudo yum groupinstall "Development Tools"
$ sudo yum install gettext-devel openssl-devel perl-CPAN perl-devel zlib-devel
</pre>
<br />
Download the git version from <a href="https://github.com/git/git/releases">git releases</a>:<br />
<pre class="brush:bash">$ wget https://github.com/git/git/archive/v1.9.2.tar.gz -O git.tar.gz
</pre>
<br />
Unpack it:<br />
<pre class="brush:bash">$ tar -zxf git.tar.gz
</pre>
<br />
Configure and Install:<br />
<pre class="brush:bash">$ make configure
$ ./configure --prefix=/usr/local
$ sudo make install
</pre>
Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-31225137563146101682018-05-30T14:34:00.004-04:002018-05-30T14:34:45.439-04:00Elastic Cloud Enterprise - How To Disable Kibana Node Creation On An Allocator<h3>
<b>Use case:</b></h3>
We have an ECE cluster that spans three availability zones. One of the AZs just serves as a "tiebreaker" zone and has a very low hardware configuration. We would like to disable Kibana node creation on this host.<br />
<br />
<h3>
<b>The API command:</b></h3>
<pre class="brush:bash">$ curl -u root:password -X POST -H 'Content-Type: application/json' https://<ece-root>/api/v0/regions/ece-regions/allocators/<availability-zone>/<allocator>/_update -d '{"overrides":{"features":{"kibana":false}}}'
</pre>
<pre class="brush:bash">Example:
$ curl -u root:password -X POST -H 'Content-Type: application/json' https://10.0.0.2:12400/api/v0/regions/ece-regions/allocators/region-1/ecehost001/_update -d '{"overrides":{"features":{"kibana":false}}}'
</pre>
<br />
<h3>
<b>Note:</b></h3>
This update overrides the previous settings, so if you want to keep any existing overrides you will have to include them in the payload.Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-50488754731662081372018-05-25T07:22:00.002-04:002018-05-25T07:22:53.707-04:00CentOS7.2 - How To Run Jenkins Under Different UserAfter successfully installing Jenkins, you can set which user you want the Jenkins service to run under.<br />
<br />
<pre class="brush:bash">$ sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
$ sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
$ yum install jenkins
</pre>
<br />
The correct way to do this is to update the "/etc/sysconfig/jenkins" file, instead of changing the JENKINS_USER variable in "/etc/init.d/jenkins".<br />
<br />
To change the Jenkins user, open /etc/sysconfig/jenkins (on Debian this file is created in /etc/default) and change JENKINS_USER to whatever you want. You have to make sure that the user exists on the system (you can check in the /etc/passwd file).<br />
JENKINS_USER="tony"<br />
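To script the edit itself, a sed one-liner works. The sketch below runs against a scratch copy; on a real host the target file is /etc/sysconfig/jenkins (Debian: /etc/default/jenkins) and the sed needs sudo:

```shell
# Demo on a scratch copy of the sysconfig file
conf="$(mktemp)"
printf 'JENKINS_USER="jenkins"\nJENKINS_PORT="8080"\n' > "$conf"

# Replace the JENKINS_USER line in place
sed -i 's/^JENKINS_USER=.*/JENKINS_USER="tony"/' "$conf"
grep '^JENKINS_USER' "$conf"   # JENKINS_USER="tony"
```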
Then change the ownership of the Jenkins home, Jenkins webroot and logs.<br />
<pre class="brush:bash">$ chown -R tony:tony /var/lib/jenkins
$ chown -R tony:tony /var/cache/jenkins
$ chown -R tony:tony /var/log/jenkins
</pre>
Then restart the Jenkins service and check that the user has changed using a ps command:<br />
<pre class="brush:bash">$ /etc/init.d/jenkins restart
$ ps -ef | grep jenkins
</pre>
Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-45889917076729045742018-05-06T20:18:00.002-04:002018-05-06T20:18:53.933-04:00CentOS 7.2 - Puppet Labs PC1 Repository el 7 - x86_64 Already InstalledRecently, when I tried to do a normal "yum update" on one of my VirtualBox CentOS 7.2 machines, I got the following error:<br />
<br />
"The GPG keys listed for the "Puppet Labs PC1 Repository el 7 - x86_64" repository are already installed but they are not correct for this package. Check that the correct key URLs are configured for this repository."<br />
<br />
This is due to a recent key change for the puppetlabs repo (https://groups.google.com/forum/#!topic/puppet-announce/-DiBwr51qhM).<br />
<br />
To fix this, just download the new key and overwrite the existing key.<br />
<br />
<h4>
<b>Download the correct key file:</b></h4>
<pre class="brush:bash">$ curl --remote-name --location https://yum.puppetlabs.com/RPM-GPG-KEY-puppet
</pre>
<br />
<h4>
<b>Verify the key:</b></h4>
<pre class="brush:bash">$ gpg --with-fingerprint RPM-GPG-KEY-puppet
gpg: directory `/root/.gnupg' created
gpg: new configuration file `/root/.gnupg/gpg.conf' created
gpg: WARNING: options in `/root/.gnupg/gpg.conf' are not yet active during this run
gpg: keyring `/root/.gnupg/secring.gpg' created
gpg: keyring `/root/.gnupg/pubring.gpg' created
pub 4096R/EF8D349F 2016-08-18 Puppet, Inc. Release Key (Puppet, Inc. Release Key)
Key fingerprint = 6F6B 1550 9CF8 E59E 6E46 9F32 7F43 8280 EF8D 349F
</pre>
<br />
<h4>
<b>Import the new key:</b></h4>
<pre class="brush:bash">$ gpg --keyid-format 0xLONG --with-fingerprint ./RPM-GPG-KEY-puppet
pub 4096R/0x7F438280EF8D349F 2016-08-18 Puppet, Inc. Release Key (Puppet, Inc. Release Key) <rel...@puppet.com>
Key fingerprint = 6F6B 1550 9CF8 E59E 6E46 9F32 7F43 8280 EF8D 349F
sub 4096R/0xA2D80E04656674AE 2016-08-18 [expires: 2021-08-17]
$ rpm --import RPM-GPG-KEY-puppet
</pre>
Now you can enjoy your "yum update".Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-80582442399016236872018-04-13T10:53:00.000-04:002018-04-13T10:56:28.020-04:00Set Up OpenVPN Server in AWS EC2 Using Ansible and DockerIn this tutorial, I will show you, step by step, how to set up your own OpenVPN server on an AWS EC2 instance using Ansible and Docker containers.<br />
<br />
The idea is simple: suppose you have a running EC2 instance; we will perform the following two steps to set up an OpenVPN server on this instance:<br />
<br />
<ol>
<li>Use Ansible playbook to install "Docker" and "Docker compose"</li>
<li>Pull kylemanna's OpenVPN Docker image and use "Docker Compose" to run and manage the service.</li>
</ol>
<br />
At the end of this tutorial, you should have a running OpenVPN container service.<br />
<br />
<h4>
Requirements:</h4>
<ul>
<li>Ansible: 2.4.3.0+</li>
<li>Docker: 17.12.1-ce+</li>
<li>Docker compose: 1.21.0</li>
</ul>
<div>
<br /></div>
<h4>
Install "Docker" and "Docker Compose"</h4>
Go to the following repository and follow the instructions in its "README.md" (<a href="https://github.com/tonylixu/ansible/blob/master/playbooks/README.md">https://github.com/tonylixu/ansible/blob/master/playbooks/README.md</a>).<br />
You will need to have a running AWS EC2 instance.<br />
The "task" yaml file:<br />
<pre class="brush:bash">---
- name: Configure / Update yum packages
yum:
name: '*'
state: latest
update_cache: yes
- name: install docker
yum:
name='docker'
state=latest
- name: service docker
service:
name=docker
state=started
enabled=yes
- name: Install Docker Compose
get_url:
url: "https://github.com/docker/compose/releases/download/{{ docker_compose_version }}/docker-compose-Linux-x86_64"
dest: "/usr/local/bin/docker-compose"
force: True
owner: "root"
group: "root"
mode: "0755"
</pre>
<br />
The "defaults" yaml file:<br />
<pre class="brush:bash">---
docker_compose_version: "1.21.0"
</pre>
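A minimal playbook to wire these tasks up might look like the following sketch (the inventory group and role name are hypothetical; it assumes the tasks above live in roles/docker/tasks/main.yml):

```yaml
# site.yml -- hypothetical wiring; adjust the host group to your inventory
- hosts: ec2_openvpn        # assumed inventory group for the EC2 instance
  become: yes
  vars:
    docker_compose_version: "1.21.0"
  roles:
    - docker
```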
<br />
Now you should have both "docker" and "docker-compose" installed on the instance.<br />
<h4>
<br /></h4>
<h4>
Install "openvpn" Container</h4>
Log into the EC2 instance. I put everything inside "/var/docker-data" as a personal habit. Create a "/var/docker-data/openvpn" directory and go into the newly created directory:<br />
<ul>
<li>Download the "docker-compose.yml" from "<a href="https://github.com/tonylixu/docker-compose/blob/master/containers/openvpn/docker-compose.yml">https://github.com/tonylixu/docker-compose/blob/master/containers/openvpn/docker-compose.yml</a>"</li>
<li>Follow the set up instructions: "<a href="https://github.com/tonylixu/docker-compose/blob/master/containers/openvpn/README.md">https://github.com/tonylixu/docker-compose/blob/master/containers/openvpn/README.md</a>"</li>
</ul>
<br />
The setup instructions are pretty self-explanatory.<br />
You can check if the container service is running by:<br />
<pre class="brush:bash">docker-compose ps
Name Command State Ports
---------------------------------------------------
openvpn ovpn_run Up 0.0.0.0:1194->1194/udp
</pre>
<br />
<h4>
Notes:</h4>
Remember to open the port 1194 (or whatever port you use) in the instance's security group.Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0tag:blogger.com,1999:blog-23076680.post-72139718897797401022018-04-02T22:23:00.001-04:002018-04-02T22:34:18.839-04:00Docker - How to Containerize Zookeeper with ExhibitorThis blog shows you a way of containerizing Zookeeper + Exhibitor. It walks you through, step by step, how to spin up Zookeeper instance(s) supervised by Exhibitor. A working "docker-compose.yml" is also provided.<br />
<br />
<h4>
<b>Requirements:</b></h4>
<ul>
<li><b>Zookeeper: </b>3.4.11</li>
<li><b>Exhibitor:</b> latest</li>
<li><b>Maven: </b>3.5.3</li>
<li><b>Docker: </b>17.12.0-ce</li>
<li><b>Docker Compose: </b>Version 3</li>
</ul>
<div>
<br /></div>
<div>
<h4>
<b>Repositories:</b></h4>
<div>
<ul>
<li><a href="https://github.com/tonylixu/docker-compose/tree/master/containers/zookeeper/exhibitor">Github</a></li>
<li><a href="https://hub.docker.com/r/tonylixu/ex-zookeeper/">Docker Hub</a></li>
</ul>
<div>
<br /></div>
</div>
</div>
<div>
<h4>
<b>Dockerfile:</b></h4>
<div>
<ul>
<li><a href="https://github.com/tonylixu/docker-compose/blob/master/containers/zookeeper/exhibitor/Dockerfile">Dockerfile</a></li>
</ul>
<div>
<br /></div>
</div>
<div>
<h4>
<b>Environment variables:</b></h4>
</div>
<div>
The container expects the following environment variables to be passed in:</div>
<div>
<ul>
<li><b>HOSTNAME </b>- addressable hostname for this node (Exhibitor will forward users of the UI to this address)</li>
<li><b>S3_BUCKET</b> - (optional) bucket used by Exhibitor for backups and coordination</li>
<li><b>S3_PREFIX </b>- (optional) key prefix within S3_BUCKET to use for this cluster</li>
<li><b>AWS_ACCESS_KEY_ID</b> - (optional) AWS access key ID with read/write permissions on S3_BUCKET</li>
<li><b>AWS_SECRET_ACCESS_KEY</b> - (optional) secret key for AWS_ACCESS_KEY_ID</li>
<li><b>AWS_REGION</b> - (optional) the AWS region of the S3 bucket (defaults to us-west-2)</li>
<li><b>ZK_PASSWORD</b> - (optional) the HTTP Basic Auth password for the "zk" user</li>
<li><b>ZK_DATA_DIR</b> - (optional) Zookeeper data directory</li>
<li><b>ZK_LOG_DIR</b> - (optional) Zookeeper log directory</li>
<li><b>AUTO_MANAGE_SETTLING_PERIOD</b> - (optional) The amount in milliseconds for Exhibitor to wait before adding/removing nodes</li>
<li><b>HTTP_PROXY_HOST </b>- (optional) HTTP Proxy hostname</li>
<li><b>HTTP_PROXY_PORT</b> - (optional) HTTP Proxy port</li>
<li><b>HTTP_PROXY_USERNAME</b> - (optional) HTTP Proxy username</li>
<li><b>HTTP_PROXY_PASSWORD</b> - (optional) HTTP Proxy password</li>
</ul>
<div>
<br /></div>
</div>
<h4>
<b>Run Container:</b></h4>
<b> 1. With AWS S3 bucket:</b><br />
<div>
<pre class="brush:bash">$ docker run -p 8181:8181 -p 2181:2181 -p 2888:2888 -p 3888:3888 \
-e S3_BUCKET=<bucket> \
-e S3_PREFIX=<key_prefix> \
-e AWS_ACCESS_KEY_ID=<access_key> \
-e AWS_SECRET_ACCESS_KEY=<secret_key> \
-e HOSTNAME=<host> \
tonylixu/ex-zookeeper:1.0
</pre>
</div>
</div>
<br />
<div>
2. <b>With local file system:</b></div>
<div>
<pre class="brush:bash">$ docker run -p 8181:8181 -p 2181:2181 -p 2888:2888 -p 3888:3888 \
-e HOSTNAME=<host> \
tonylixu/ex-zookeeper:1.0</pre>
</div>
<div>
<h4>
<b><br /></b></h4>
<h4>
<b>Run with Docker Compose:</b></h4>
</div>
<div>
Download the "<a href="https://github.com/tonylixu/docker-compose/blob/master/containers/zookeeper/exhibitor/docker-compose.yml">docker-compose.yml</a><span id="goog_226237596"></span><a href="https://www.blogger.com/"></a><span id="goog_226237597"></span>" file. If you don't want to build your own image, comment out the "build: ."</div>
<div>
To start a single instance:</div>
<div>
<pre class="brush:bash">$ docker-compose up -d</pre>
</div>
<div>
</div>
<br />
<div>
To run multiple containers, you can use the --scale option:</div>
<div>
<pre class="brush:bash">$ docker-compose up --scale zookeeper=3 -d</pre>
</div>
<div>
</div>
<br />
<div>
This will create three Zookeeper containers, each with its own Exhibitor. Give it a minute or two to let the nodes balance out and recognize each other. This is only recommended for dev/test environments; for production deployments, I strongly suggest that you create separate services in the "docker-compose.yml" file.<br />
<b><span style="color: red;">Note: the local file system backup does not work in "scale" mode; please create separate services in the "docker-compose.yml" file for production environments.</span></b></div>
<div>
<br /></div>
<div>
<h4>
<b>What is Zookeeper?</b></h4>
</div>
<div>
Apache ZooKeeper is a software project of the Apache Software Foundation, providing an open source distributed configuration service, synchronization service, and naming registry for large distributed systems. Co-ordinating and managing a service in a distributed environment is a complicated process. ZooKeeper solves this issue with its simple architecture and API. ZooKeeper allows developers to focus on core application logic without worrying about the distributed nature of the application.</div>
<div>
<br /></div>
<div>
The ZooKeeper framework was originally built at "Yahoo!" for accessing their applications in an easy and robust manner. Later, Apache ZooKeeper became a standard coordination service used by Hadoop, HBase, and other distributed frameworks. Now ZooKeeper is a top-level Apache project.<br />
<br /></div>
<div>
<h4>
What services does Zookeeper provide?</h4>
</div>
<div>
Apache ZooKeeper is a service used by a cluster (group of nodes) to coordinate between themselves and maintain shared data with robust synchronization techniques.<br />
<br /></div>
<div>
The common services provided by ZooKeeper are as follows:</div>
<div>
<ul>
<li><b>Naming service </b>− Identifying the nodes in a cluster by name. It is similar to DNS, but for nodes.</li>
<li><b>Configuration management </b>− Latest and up-to-date configuration information of the system for a joining node.</li>
<li><b>Cluster management</b> − Joining / leaving of a node in a cluster and node status at real time.</li>
<li><b>Leader election</b> − Electing a node as leader for coordination purpose.</li>
<li><b>Locking and synchronization service</b> − Locking the data while modifying it.</li>
<li><b>Highly reliable data registry </b>− Availability of data even when one or a few nodes are down.</li>
</ul>
</div>
<div>
<br /></div>
<div>
<h4>
<b>Benefits of Zookeeper:</b></h4>
</div>
<div>
<ul>
<li>Simple distributed coordination process</li>
<li>Synchronization − Mutual exclusion and co-operation between server processes. This process helps in Apache HBase for configuration management.</li>
<li>Ordered Messages</li>
<li>Serialization − Encode the data according to specific rules to ensure your application runs consistently. This approach can be used in MapReduce to coordinate queues for executing running threads.</li>
<li>Reliability</li>
<li>Atomicity − Data transfer either succeed or fail completely, but no transaction is partial.</li>
</ul>
</div>
<div>
<br /></div>
<div>
<h4>
<b>What is Exhibitor?</b></h4>
</div>
<div>
Exhibitor is a supervisor for Zookeeper instances. It performs periodic backups, checks node status, and automatically restarts Zookeeper on node failures.<br />
<br /></div>
<div>
<h4>
<b>Exhibitor features:</b></h4>
</div>
<div>
<b>Zookeeper instance monitoring:</b></div>
<div>
<ul>
<li>Each Exhibitor instance monitors the ZooKeeper server running on the same server. If ZooKeeper is not running, Exhibitor will write the zoo.cfg file (see Cluster-wide Configuration below) and start it. If ZooKeeper crashes for some reason, Exhibitor will restart it.</li>
</ul>
<br /></div>
<div>
<b>Backup/Restore:</b></div>
<div>
<ul>
<li>Backups in a ZooKeeper ensemble are more complicated than for a traditional data store (e.g. a RDBMS). Generally, most of the data in ZooKeeper is ephemeral. It would be harmful to blindly restore an entire ZooKeeper data set. What is needed is selective restoration to prevent accidental damage to a subset of the data set. Exhibitor enables this.</li>
<li>Exhibitor will periodically backup the ZooKeeper transaction files. Once backed up, you can index any of these transaction files. Once indexed, you can search for individual transactions and “replay” them to restore a given ZNode to ZooKeeper.</li>
</ul>
</div>
<div>
<b>Log Cleanup:</b></div>
<div>
<ul>
<li>Exhibitor does this maintenance automatically.</li>
</ul>
<div>
If you have any questions, please send me an email at tony@lixu.ca</div>
</div>
Tony Li Xuhttp://www.blogger.com/profile/10146779495057607642noreply@blogger.com0