Sunday, February 3, 2013

BitTorrent download using a Raspberry Pi

install Transmission

$ sudo apt-get install transmission-daemon

$ cd /etc/transmission-daemon
$ sudo cp settings.json settings.json.sbak
$ sudo vi settings.json
"download-dir": "/home/oopsmonk/BT-Download"
"rpc-whitelist": "*.*.*.*"
"rpc-username": "oopsmonk"
"rpc-password": "web-login-pwd"

Change permissions on the download folder:
$ chmod 777 /home/oopsmonk/BT-Download
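chmod 777 works but leaves the folder world-writable. On Debian/Raspbian the daemon runs as the debian-transmission user, so group write access is usually enough. A sketch of the tighter variant, demonstrated on a throwaway directory (the real chgrp is shown as a comment since it only makes sense on the Pi):

```shell
# Group-writable instead of world-writable
mkdir -p /tmp/BT-Download
chmod 775 /tmp/BT-Download
stat -c '%a' /tmp/BT-Download    # prints 775
# On the Pi, also hand the directory to the daemon's group:
#   sudo chgrp debian-transmission /home/oopsmonk/BT-Download
```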

The torrent file location


Serve the WebGUI on HTTP port 80 via nginx

$ sudo apt-get install nginx
$ sudo vi /etc/nginx/sites-available/default
        #listen   80; ## listen for ipv4; this line is default and implied
        #listen   [::]:80 default_server ipv6only=on; ## listen for ipv6
        location /transmission {
                # pass requests through to Transmission's RPC/web port (9091 by default)
                proxy_pass http://127.0.0.1:9091;
        }
$ sudo service nginx restart
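After the restart, Transmission's web UI should answer through nginx on port 80 as well as directly on 9091. A quick check from the Pi itself (output depends on your setup, so none is shown here):

```shell
# Direct RPC port vs. the nginx-proxied path
curl -sI http://localhost:9091/transmission/web/ | head -n1
curl -sI http://localhost/transmission/web/ | head -n1
```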


Setting Up Transmission’s Web Interface
Linux防健忘日誌 No.69 - Installing and configuring transmission-daemon on Ubuntu 12.04

OpenNMS Architecture Introduction (Discovery & Monitor)

O.S. : Ubuntu12.04 LTS
OpenNMS Version : 1.10.7

OpenNMS is based on the TMN and FCAPS network management models.
OpenNMS Block Diagram

Discovery & Monitor daemons
Eventd (event handling daemon)
Configuration files:
eventconf.xml -> Defines the UEIs (Universal Event Identifiers).
eventd-configuration.xml -> Defines operating parameters for Eventd such as timeouts, listener threads and the listener port.
events-archiver-configuration.xml -> Configuration for the events archiver daemon; fine-tunes the events archive subsystem.
etc/events/*.xml -> Vendor UEI definition files.
Listens for the "eventsConfigChange" event.

Discovery (discovery-configuration.xml)
The Discovery service implements the Singleton pattern.
Listens for events: discPause, interfaceDeleted, discResume, nodeGainedInterface, discoveryConfigChange and reloadDaemonConfig.

Capsd (Capabilities daemon, capsd-configuration.xml)
When notified by the discovery process that a new node has been found, Capsd polls for all the capabilities of the node and loads the collected data into the database.
Listens for events: deleteService, changeService, deleteInterface, newSuspect, forceRescan, addInterface, nodeDeleted, addNode, updateServer, nodeAdded, duplicateNodeDeleted, deleteNode and updateService.
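Capsd's newSuspect handling is also the usual way to add a node by hand: OpenNMS ships a send-event.pl helper that injects events into Eventd. A sketch (the /opt/opennms path and the target IP are assumptions; adjust to your install):

```shell
# Ask Capsd to probe a specific interface by injecting a newSuspect event
/opt/opennms/bin/send-event.pl \
    --interface 192.168.1.50 \
    uei.opennms.org/internal/discovery/newSuspect
```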

Collectd (collectd-configuration.xml)  
Responsible for gathering and storing data from various sources, including SNMP, JMX, HTTP and NSClient.
Listens for events: nodeGainedService, primarySnmpInterfaceChanged, reinitializePrimarySnmpInterface, interfaceReparented, nodeDeleted, duplicateNodeDeleted, interfaceDeleted, serviceDeleted, schedOutagesChanged, configureSNMP, thresholdConfigChange, reloadDaemonConfig and nodeCategoryMembershipChanged.

Poller (poller-configuration.xml)
Polls services such as ICMP, DNS, FTP, HTTP, HTTPS, SSH, MySQL and more.
Listens for events: nodeGainedService, serviceDeleted, interfaceReparented, nodeDeleted, nodeLabelChanged, duplicateNodeDeleted, interfaceDeleted, suspendPollingService, resumePollingService, schedOutagesChanged, demandPollService, thresholdConfigChange, assetInfoChanged and nodeCategoryMembershipChanged.

RTC (Real-Time Collector)
The RTC initializes its data from the database at startup, then subscribes to the events subsystem to receive events of interest and keep the data up to date.
Listens for events: nodeGainedService, nodeLostService, interfaceDown, nodeDown, nodeUp, nodeCategoryMembershipChanged, interfaceUp, nodeRegainedService, serviceDeleted, serviceUnmanaged, interfaceReparented, subscribe, unsubscribe and assetInfoChanged.

There are two major ways that OpenNMS gathers data about the network.
The first is through polling. Processes called monitors connect to a network resource and perform a simple test to see if the resource is responding correctly. If not, events are generated.
The second is through data collection using collectors. Currently, the only collector is for SNMP data.
Collectd records SNMP data via RRDtool under /share/rrd/snmp/NodeID/*, e.g. tcpOutSegs.jrb, icmpInEchos.jrb, tcpInSegs.jrb, ifInOctets.jrb, ifOutOctets.jrb, ...
Poller records service response data via RRDtool under /share/rrd/response/IP/*, e.g. icmp.jrb, ssh.jrb, ...


Discovery & Monitor Flow
Here is the event flow when the "Save and Restart Discovery" button is pressed on the WebGUI.
Figure 1

Figure 2


Saturday, February 2, 2013

Create an S3 bucket on AWS using s3cmd

1. Open the S3 web console

2. Create Bucket
You can use any names for your objects, but bucket names must be unique across all of Amazon S3. 
Objects stored in Amazon S3 are addressable using the REST API under the domain 
For example, if the object homepage.html is stored in the Amazon S3 bucket mybucket, its address would be
For more information, see Virtual Hosting of Buckets.
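Because bucket names end up in DNS host names, they follow DNS-ish rules. A rough pre-flight check (simplified; the full rules, e.g. no adjacent dots and no IP-address look-alikes, are in the S3 documentation):

```shell
# Very rough DNS-style bucket name check: 3-63 chars, lowercase letters,
# digits, dots and hyphens, starting and ending with a letter or digit.
valid_bucket() {
  echo "$1" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$'
}
valid_bucket "test.s3cmd.cli" && echo "ok"        # prints ok
valid_bucket "Bad_Name"       || echo "rejected"  # prints rejected
```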

Install s3cmd
download s3cmd from

install on Ubuntu 12.04 (from the extracted source tarball)
$ sudo python setup.py install

configure s3cmd
$ s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3
Secret Key:

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: PWD_WHAT_U_WANT
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP and can't be used if you're behind a proxy
Use HTTPS protocol [No]: Yes

New settings:
  Access Key:
  Secret Key:
  Encryption password: PWD_WHAT_U_WANT
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: True
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n]
Please wait, attempting to list all buckets...
Success. Encryption and decryption worked fine :-)

Save settings? [y/N] y
Configuration saved to '/home/oopsmonk/.s3cfg'
Change the .s3cfg permissions so only your user can read the stored keys.
$ chmod 600 /home/oopsmonk/.s3cfg

make Bucket
$ s3cmd mb s3://test.s3cmd.cli
Bucket 's3://test.s3cmd.cli/' created

put file
$ s3cmd put ./README s3://test.s3cmd.cli/
WARNING: Module python-magic is not available. Guessing MIME types based on file extensions.
./README -> s3://test.s3cmd.cli/README  [1 of 1]
13130 of 13130   100% in    1s    11.64 kB/s  done

delete file
$ s3cmd del s3://test.s3cmd.cli/README
File s3://test.s3cmd.cli/README deleted

delete Bucket
$ s3cmd rb s3://test.s3cmd.cli
Bucket 's3://test.s3cmd.cli/' removed
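Two s3cmd options worth knowing beyond the basics above (the bucket and directory names are just the examples from this post; both flags exist in s3cmd 1.x):

```shell
# Preview what a sync would transfer without touching S3
s3cmd sync --dry-run ./mydir/ s3://test.s3cmd.cli/mydir/

# Upload a file and make it world-readable in one step
s3cmd put --acl-public ./README s3://test.s3cmd.cli/
```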

  Make bucket
      s3cmd mb s3://BUCKET
  Remove bucket
      s3cmd rb s3://BUCKET
  List objects or buckets
      s3cmd ls [s3://BUCKET[/PREFIX]]
  List all object in all buckets
      s3cmd la
  Put file into bucket
      s3cmd put FILE [FILE...] s3://BUCKET[/PREFIX]
  Get file from bucket
      s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
  Delete file from bucket
      s3cmd del s3://BUCKET/OBJECT
  Synchronize a directory tree to S3
      s3cmd sync LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR
  Disk usage by buckets
      s3cmd du [s3://BUCKET[/PREFIX]]
  Get various information about Buckets or Files
      s3cmd info s3://BUCKET[/OBJECT]
  Copy object
      s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
  Move object
      s3cmd mv s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
  Modify Access control list for Bucket or Files
      s3cmd setacl s3://BUCKET[/OBJECT]
  Enable/disable bucket access logging
      s3cmd accesslog s3://BUCKET
  Sign arbitrary string using the secret key
      s3cmd sign STRING-TO-SIGN
  Fix invalid file names in a bucket
      s3cmd fixbucket s3://BUCKET[/PREFIX]
  Create Website from bucket
      s3cmd ws-create s3://BUCKET
  Delete Website
      s3cmd ws-delete s3://BUCKET
  Info about Website
      s3cmd ws-info s3://BUCKET
  List CloudFront distribution points
      s3cmd cflist
  Display CloudFront distribution point parameters
      s3cmd cfinfo [cf://DIST_ID]
  Create CloudFront distribution point
      s3cmd cfcreate s3://BUCKET
  Delete CloudFront distribution point
      s3cmd cfdelete cf://DIST_ID
  Change CloudFront distribution point parameters
      s3cmd cfmodify cf://DIST_ID
  Display CloudFront invalidation request(s) status
      s3cmd cfinvalinfo cf://DIST_ID[/INVAL_ID]