Goroutines and channel communication in Go

周海汉, 2013.9.17
许式伟's《Go语言编程》(Go Programming) has a simple example that nicely shows goroutines and the channel type chan working together. It is quite elegant:

[andy@s1 test]$ cat sum.go
package main

import "fmt"

func sum(values []int, myChan chan int) {

    sum := 0
    for _, value := range values {
        sum += value
    }

     myChan <- sum // send the result into the channel
}

func main() {

    myChan := make( chan int,2)

    values := []int {1,2,3,5,5,4}
    go sum(values, myChan)     // goroutine 1
    go sum(values[:3], myChan) // goroutine 2

    sum1,sum2 := <-myChan, <-myChan
    fmt.Println("Result:",sum1,sum2,sum1+sum2)
}

Result:
[andy@s1 test]$ go run sum.go
Result: 20 6 26


Recovering files after a linux rm -rf *

By 周海汉, 2013.9.12

My hand was too quick, and I regretted it to the core. I only meant to delete one file: rm path/myfile.txt
but somehow a * got appended, so it became
rm path/myfile.txt *
A quick ls showed all the code was gone: not committed, not backed up, and rm never asked for confirmation. In one second everything was wiped clean.

Hoping against hope, I searched everywhere, but there was no archive backup anywhere; the few backups that did exist were from very early work.
I wanted to cry but had no tears.

That Linux rm deletes without keeping anything really is a trap. No wonder many people stare at an rm command for half a minute before daring to run it. Some even suggest aliasing rm to mv for the root account.
There was no way around it: the files had to be recovered.
The machine was in a server room, so cutting power, pulling the disk, or rebooting was not an option.

First, the filesystem had to be remounted read-only immediately.

Otherwise the other daemons keep reading and writing, and then nobody can recover anything. When planning disks, always partition by function; otherwise recovering from an accidental delete is very hard. For example, installing Linux with everything on a single / partition makes this kind of rescue painful.
Here /data is mounted on /dev/sdb1:

[root@hs12 sh]# mount
/dev/sdb1 on /data type ext4 (rw)

[root@hs12 hadoop]# mount -r -n -o remount /data
mount: /data is busy
So check which processes are using it:
[root@hs12 hadoop]# fuser -v -m /data
Many java and hadoop processes showed up; kill them.
[root@hs12 hadoop]# mount -r -n -o remount /data
Success.
Now touching a file inside /data fails:

[root@hs12 data]# touch a
touch: cannot touch `a': Read-only file system

That was a huge relief. With the filesystem mounted read-only I could recover at my own pace, no longer worrying that my files would be overwritten.
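Condensed, the read-only remount procedure used above looks like this (a recap sketch; the device and mount point are the ones from this post):

# Freeze the filesystem so nothing can overwrite the freed blocks.
mount | grep /data              # /dev/sdb1 on /data type ext4 (rw)
fuser -v -m /data               # list the processes holding /data open
# kill those processes (here: java/hadoop daemons), then:
mount -r -n -o remount /data
touch /data/a                   # should now fail: Read-only file system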

Using debugfs

Use debugfs to look up the inodes of the deleted files, then try to recover from there.
[root@hs12 ~]# debugfs /dev/sdb1
debugfs 1.41.12 (17-May-2010)

debugfs:
debugfs: lsdel
Inode Owner Mode Size Blocks Time deleted
0 deleted inodes found.

The magical debugfs found no deleted inodes at all. Maybe I just don't know how to use it?

Failure!

Recovering with grep

grep can search the raw block device for a known string and dump the bytes around each hit, which might recover at least part of the data.
[root@hs12 hadoop]# grep -a -B 100 -A 100 'active.sh' /dev/sdb1 > results.txt
All I got was a pile of binary garbage.
Failure!

Using ext3grep

My filesystem is ext4, so ext3grep simply does not work.

 

Nothing left but to look for specialized tools

Using testdisk 6.14

Usage guide:
http://www.cgsecurity.org/wiki/TestDisk%3a_undelete_file_for_ext2
Download:
wget http://www.cgsecurity.org/testdisk-6.14.linux26-x86_64.tar.bz2
[root@hs12 hadoop]# cd testdisk-6.14
[root@hs12 testdisk-6.14]# ls
Android.mk ChangeLog documentation.html fidentify_static INFO l photorec.8 README testdisk.8 testdisk_static VERSION
AUTHORS COPYING fidentify.8 ico jni NEWS photorec_static readme.txt testdisk.log THANKS

[root@hs12 testdisk-6.14]# ./testdisk_static
TestDisk 6.14, Data Recovery Utility, July 2013
Christophe GRENIER <grenier@cgsecurity.org>
http://www.cgsecurity.org
1 P MS Data 2048 7811889151 7811887104 [primary]
Directory /

>drwxr-xr-x 500 500 4096 28-Aug-2013 13:41 .
drwxr-xr-x 500 500 4096 28-Aug-2013 13:41 ..
drwxrwxrwx 500 500 16384 18-Jul-2013 15:42 lost+found
drwxrwxrwx 500 500 12288 12-Sep-2013 00:36 logs

drwxrwxrwx 500 500 4096 25-Jul-2013 16:54 test1
drwxrwxr-x 500 500 4096 12-Sep-2013 03:28 statis
drwxrwxr-x 500 500 4096 12-Sep-2013 17:40 sh
drwxrwxr-x 500 500 12288 3-Sep-2013 15:28 hadoop

Next
Use Right to change directory, h to hide deleted files
q to quit, : to select the current file, a to select all files
C to copy the selected files, c to copy the current file

Select the directory and press Enter. The deleted file names finally show up, but why is every file size 0?
TestDisk 6.14, Data Recovery Utility, July 2013
Christophe GRENIER <grenier@cgsecurity.org>
http://www.cgsecurity.org
1 P MS Data 2048 7811889151 7811887104 [primary]
Directory /sh

drwxrwxr-x 500 500 4096 12-Sep-2013 17:40 .
drwxr-xr-x 500 500 4096 28-Aug-2013 13:41 ..
>-rwxrwxr-x 500 500 0 12-Sep-2013 17:40 active.awk
-rwxrwxr-x 500 500 0 12-Sep-2013 17:40 active.sh
lrwxrwxrwx 500 500 13 2-Aug-2013 17:17 statis
-rwxrwxr-x 500 500 0 12-Sep-2013 17:40 dateutil.sh
-rwxrwxr-x 500 500 0 12-Sep-2013 17:40 hiveput.sh
-rwxrwxr-x 500 500 0 12-Sep-2013 17:40 multidate.sh
drwxrwxr-x 500 500 4096 3-Sep-2013 15:24 errlogs
-rwxrwxr-x 500 500 0 12-Sep-2013 17:40 hiveactive.sh
drwxrwxr-x 500 500 4096 12-Sep-2013 17:40 cps
drwxrwxr-x 500 500 4096 30-Aug-2013 15:21 TempStatsStore
-rwxrwxr-x 500 500 0 12-Sep-2013 17:40 bkactive.awk
-rwxrwxr-x 500 500 0 12-Sep-2013 17:40 test.awk
-rwxrwxr-x 500 500 0 12-Sep-2013 17:40 t.awk
-rwxrwxr-x 500 500 0 12-Sep-2013 17:40 print
-rw-rw-r-- 500 500 0 12-Sep-2013 17:40 a
-rw-rw-r-- 500 500 0 12-Sep-2013 17:40 a.txt
-rwxrwxr-x 500 500 0 12-Sep-2013 17:40 user.awk
-rw-rw-r-- 500 500 0 12-Sep-2013 17:40 luan
-rwxrwxr-x 500 500 0 12-Sep-2013 17:40 cps.sh
-rwxrwxr-x 500 500 0 12-Sep-2013 17:40 hivenewdev.sh
-rw-rw-r-- 500 500 0 12-Sep-2013 17:40 hive2mysql.sh
-rw-rw-r-- 500 500 0 12-Sep-2013 17:40 py
lrwxrwxrwx 500 500 12 26-Aug-2013 09:34 userdata
lrwxrwxrwx 500 500 10 26-Aug-2013 09:34 bidata
-rwxrwxr-x 500 500 0 12-Sep-2013 17:40 bi.awk
-rw-r--r-- 500 500 0 12-Sep-2013 17:40 luandoutang_09_900037.csv
-rw-rw-r-- 500 500 0 12-Sep-2013 17:40 luan1
-rwxr-xr-x 500 500 0 12-Sep-2013 17:40 luan.awk
-rwxr-xr-x 500 500 0 12-Sep-2013 17:40 luan.sh
-rwxrwxr-x 500 500 0 12-Sep-2013 17:40 dvid_price.awk
-rwxrwxr-x 500 500 0 12-Sep-2013 17:40 cid_price.awk
lrwxrwxrwx 500 500 15 9-Sep-2013 13:33 adsdkdata
-rw-rw-r-- 500 500 0 12-Sep-2013 17:40 0908.txt
-rw-rw-r-- 500 500 0 12-Sep-2013 17:40 09081.txt
-rw-rw-r-- 500 500 0 12-Sep-2013 17:40 09.txt
drwxrwxr-x 500 500 4096 9-Sep-2013 16:22 pid

TestDisk 6.14, Data Recovery Utility, July 2013

Please select a destination where /sh/active.awk will be copied.
Keys: Arrow keys to select another directory
C when the destination is correct
Q to quit

Press a to select all files, C to copy them, pick a destination directory, and confirm.

Looking at the destination, the file names were all back, but every file was empty. testdisk, which claims to support ext4 undelete, failed.

I also downloaded the newer testdisk-7.0-WIP.linux26-x86_64.tar.bz2: same problem.

Recovering with extundelete-0.2.4

Official site:

http://extundelete.sourceforge.net/

Download:

wget http://downloads.sourceforge.net/project/extundelete/extundelete/0.2.4/extundelete-0.2.4.tar.bz2

extundelete depends on e2fsprogs:
[root@hs12 extundelete-0.2.4]# ./configure
Configuring extundelete 0.2.4
configure: error: Can't find ext2fs library

[root@hs12 extundelete-0.2.4]# yum install e2fsprogs-devel

[root@hs12 extundelete-0.2.4]# ./configure
Configuring extundelete 0.2.4
Writing generated files to disk

[root@hs12 extundelete-0.2.4]# make && make install

[root@hs12 extundelete-0.2.4]# cd src
[root@hs12 src]# ls
block.c cli.cc extundelete-block.o extundelete-cli.o extundelete.h extundelete-priv.h jfs_compat.h Makefile Makefile.in
block.h extundelete extundelete.cc extundelete-extundelete.o extundelete-insertionops.o insertionops.cc kernel-jbd.h Makefile.am

[root@hs12 src]# ./extundelete
No action specified; implying --superblock.
./extundelete: Missing device name.
Usage: ./extundelete [options] [--] device-file
Options:
--version, -[vV] Print version and exit successfully.
--help, Print this help and exit successfully.
--superblock Print contents of superblock in addition to the rest.
If no action is specified then this option is implied.
--journal Show content of journal.
--after dtime Only process entries deleted on or after 'dtime'.
--before dtime Only process entries deleted before 'dtime'.
Actions:
--inode ino Show info on inode 'ino'.
--block blk Show info on block 'blk'.
--restore-inode ino[,ino,...]
Restore the file(s) with known inode number 'ino'.
The restored files are created in ./RECOVERED_FILES
with their inode number as extension (ie, file.12345).
--restore-file 'path' Will restore file 'path'. 'path' is relative to root
of the partition and does not start with a '/'
The restored file is created in the current
directory as 'RECOVERED_FILES/path'.
--restore-files 'path' Will restore files which are listed in the file 'path'.
Each filename should be in the same format as an option
to --restore-file, and there should be one per line.
--restore-directory 'path'
Will restore directory 'path'. 'path' is relative to the
root directory of the file system. The restored
directory is created in the output directory as 'path'.
--restore-all Attempts to restore everything.
-j journal Reads an external journal from the named file.
-b blocknumber Uses the backup superblock at blocknumber when opening
the file system.
-B blocksize Uses blocksize as the block size when opening the file
system. The number should be the number of bytes.
--log 0 Make the program silent.
--log filename Logs all messages to filename.
--log D1=0,D2=filename Custom control of log messages with comma-separated
Examples below: list of options. Dn must be one of info, warn, or
--log info,error error. Omission of the '=name' results in messages
--log warn=0 with the specified level to be logged to the console.
--log error=filename If the parameter is '=0', logging for the specified
level will be turned off. If the parameter is
'=filename', messages with that level will be written
to filename.
-o directory Save the recovered files to the named directory.
The restored files are created in a directory
named 'RECOVERED_FILES/' by default.
./extundelete: Error parsing command-line options.

[root@hs12 src]# ./extundelete /dev/sdb1 --restore-directory /data/sh
NOTICE: Extended attributes are not restored.
Loading filesystem metadata … 29800 groups loaded.
Loading journal descriptors … 28266 descriptors loaded.
Failed to restore file /data/sh
Could not find correct inode number past inode 2.
Try altering the filename to one of the entries listed below.
File name | Inode number | Deleted status
. 2
.. 2
lost+found 11
logs 195821569
dfs 14942209
mapred 165806081
bidata 221380609
userdata 3407873
trackdata 112459777
adsdkdata 135135233
test 227409921
a.tar.gz 12
t1 13 Deleted
test1 227278849
statis 109051905
sh 24641537
hadoop 59506689
./extundelete: Operation not permitted while restoring directory.
./extundelete: Operation not permitted when trying to examine filesystem
[root@hs12 src]# ./extundelete /dev/sdb1 --restore-file /data/sh/active.awk
NOTICE: Extended attributes are not restored.
Loading filesystem metadata … 29800 groups loaded.
Loading journal descriptors … 28266 descriptors loaded.
Failed to restore file /data/sh/active.awk
Could not find correct inode number past inode 2.
Try altering the filename to one of the entries listed below.
File name | Inode number | Deleted status
. 2
.. 2
lost+found 11
logs 195821569
dfs 14942209
mapred 165806081
bidata 221380609
userdata 3407873
trackdata 112459777
adsdkdata 135135233
test 227409921
a.tar.gz 12
t1 13 Deleted
test1 227278849
statis 109051905
sh 24641537
hadoop 59506689
./extundelete: Operation not permitted while restoring file.
./extundelete: Operation not permitted when trying to examine filesystem

[root@hs12 RECOVERED_FILES]# ../extundelete /dev/sdb1 --restore-all
NOTICE: Extended attributes are not restored.
Loading filesystem metadata … 29800 groups loaded.
Loading journal descriptors … 28266 descriptors loaded.
[root@hs12 RECOVERED_FILES]# cd RECOVERED_FILES/
[root@hs12 RECOVERED_FILES]# cd sh
[root@hs12 sh]# ls
09081.txt a bknewdev.awk charge.sh derby.log hive2mysql.sh luan.awk newdev.awk so.awk
0908.txt active.awk b.txt charge.txt dvid_price.awk hiveactive.sh luandoutang_09_900037.csv newdev.sh t.awk
09.txt active.sh charge cid_price.awk emptycid hivenewdev.sh luan.sh pid.awk TempStatsStore
100001 adsdkdata charge_2013-09-09.txt cps err.txt hiveput.sh multidate.sh pid.sh test.awk
1dev.awk a.txt charge_20130909_.txt cps_newdev.java getdvid.awk insdata.py newdev print user.awk
201309081.txt bi.awk charge2mysql.sh cps.sh getmysql.sh luan newdev1.awk py
201309091.txt bkactive.awk charge.awk dateutil.sh getnewdev_from_mysql.sh luan1 newdev2mysql.sh sendmail.sh
[root@hs12 sh]# ls -l
total 225360
-rw-r--r-- 1 root root 29251633 Sep 12 19:46 09081.txt
-rw-r--r-- 1 root root 35249787 Sep 12 19:46 0908.txt
-rw-r--r-- 1 root root 64501420 Sep 12 19:46 09.txt
-rw-r--r-- 1 root root 2378 Sep 12 19:46 100001
-rw-r--r-- 1 root root 840 Sep 12 19:46 1dev.awk
-rw-r--r-- 1 root root 33931129 Sep 12 19:46 201309081.txt
-rw-r--r-- 1 root root 27169653 Sep 12 19:46 201309091.txt
-rw-r--r-- 1 root root 1 Sep 12 19:46 a
-rw-r--r-- 1 root root 2227 Sep 12 19:46 active.awk
-rw-r--r-- 1 root root 999 Sep 12 19:46 active.sh
-rw-r--r-- 1 root root 19242484 Sep 12 19:46 adsdkdata
-rw-r--r-- 1 root root 5626 Sep 12 19:46 a.txt
-rw-r--r-- 1 root root 331 Sep 12 19:46 bi.awk
-rw-r--r-- 1 root root 1543 Sep 12 19:46 bkactive.awk
-rw-r--r-- 1 root root 931 Sep 12 19:46 bknewdev.awk
-rw-r--r-- 1 root root 11 Sep 12 19:46 b.txt
-rw-r--r-- 1 root root 230 Sep 12 19:46 charge
-rw-r--r-- 1 root root 20964603 Sep 12 19:46 charge_2013-09-09.txt
-rw-r--r-- 1 root root 229 Sep 12 19:46 charge_20130909_.txt
-rw-r--r-- 1 root root 1243 Sep 12 19:46 charge2mysql.sh
-rw-r--r-- 1 root root 428 Sep 12 19:46 charge.awk
-rw-r--r-- 1 root root 2822 Sep 12 19:46 charge.sh
-rw-r--r-- 1 root root 227 Sep 12 19:46 charge.txt
-rw-r--r-- 1 root root 1227 Sep 12 19:46 cid_price.awk
drwxr-xr-x 2 root root 4096 Sep 12 19:46 cps
-rw-r--r-- 1 root root 12070 Sep 12 19:46 cps_newdev.java
-rw-r--r-- 1 root root 2764 Sep 12 19:46 cps.sh
-rw-r--r-- 1 root root 885 Sep 12 19:46 dateutil.sh
-rw-r--r-- 1 root root 992 Sep 12 19:46 derby.log
-rw-r--r-- 1 root root 658 Sep 12 19:46 dvid_price.awk
-rw-r--r-- 1 root root 54217 Sep 12 19:46 emptycid
-rw-r--r-- 1 root root 64279 Sep 12 19:46 err.txt
-rw-r--r-- 1 root root 379 Sep 12 19:46 getdvid.awk
-rw-r--r-- 1 root root 1217 Sep 12 19:46 getmysql.sh
-rw-r--r-- 1 root root 1552 Sep 12 19:46 getnewdev_from_mysql.sh
-rw-r--r-- 1 root root 532 Sep 12 19:46 hive2mysql.sh
-rw-r--r-- 1 root root 858 Sep 12 19:46 hiveactive.sh
-rw-r--r-- 1 root root 926 Sep 12 19:46 hivenewdev.sh
-rw-r--r-- 1 root root 683 Sep 12 19:46 hiveput.sh
-rw-r--r-- 1 root root 2227 Sep 12 19:46 insdata.py
-rw-r--r-- 1 root root 1045 Sep 12 19:46 luan
-rw-r--r-- 1 root root 813 Sep 12 19:46 luan1
-rw-r--r-- 1 root root 336 Sep 12 19:46 luan.awk
-rw-r--r-- 1 root root 72909 Sep 12 19:46 luandoutang_09_900037.csv
-rw-r--r-- 1 root root 180 Sep 12 19:46 luan.sh
-rw-r--r-- 1 root root 420 Sep 12 19:46 multidate.sh
drwxr-xr-x 2 root root 4096 Sep 12 19:46 newdev
-rw-r--r-- 1 root root 777 Sep 12 19:46 newdev1.awk
-rw-r--r-- 1 root root 1290 Sep 12 19:46 newdev2mysql.sh
-rw-r--r-- 1 root root 738 Sep 12 19:46 newdev.awk
-rw-r--r-- 1 root root 762 Sep 12 19:46 newdev.sh
-rw-r--r-- 1 root root 693 Sep 12 19:46 pid.awk
-rw-r--r-- 1 root root 518 Sep 12 19:46 pid.sh
-rw-r--r-- 1 root root 99 Sep 12 19:46 print
-rw-r--r-- 1 root root 30324 Sep 12 19:46 py
-rw-r--r-- 1 root root 160 Sep 12 19:46 sendmail.sh
-rw-r--r-- 1 root root 744 Sep 12 19:46 so.awk
-rw-r--r-- 1 root root 93 Sep 12 19:46 t.awk
drwxr-xr-x 2 root root 4096 Sep 12 19:46 TempStatsStore
-rw-r--r-- 1 root root 311 Sep 12 19:46 test.awk
-rw-r--r-- 1 root root 385 Sep 12 19:46 user.awk
[root@hs12 sh]# vi active.awk
Opening it up: the scripts are all there.

The recovery was a complete success.
So the only tool that worked was extundelete, and only when restoring everything with --restore-all; restoring a specific file or directory did not succeed.
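In short, the path that finally worked was: freeze the filesystem, then let extundelete restore everything. A recap sketch using only commands shown above (extundelete writes its output under ./RECOVERED_FILES in the current directory):

# Recap of the successful recovery path.
mount -r -n -o remount /data            # remount read-only first
./extundelete /dev/sdb1 --restore-all   # targeted restore failed here; restore-all worked
ls RECOVERED_FILES/sh                   # the deleted scripts are back, contents intact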

A weight off my mind :)
Lessons for whoever comes next: always back up, partition disks by function, and alias rm to rm -i (alias rm="rm -i").
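For prevention, beyond alias rm="rm -i", one option is to route deletions through a trash directory instead of removing files outright. A minimal sketch for ~/.bashrc (the trash path and helper name are my own, not from this post):

# Prompt before every delete, and provide a "trash" helper that moves
# files aside instead of deleting them.
alias rm='rm -i'
trash() {
    # hypothetical helper: move arguments into a dated per-user trash dir
    local dir=~/.trash/$(date +%Y%m%d)
    mkdir -p "$dir"
    mv -- "$@" "$dir"/
}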


Passing parameters from the shell to awk

2013.9.12
Pass them in with -v arg=value.

[hadoop@hs12 sh]$ cat a
2|1|文字|
2|2|文字|
2|3|文字|

[hadoop@hs12 sh]$ awk -F "|" -v b=2 '{ if($2==b) { print $0;} }' a
2|2|文字|
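The same -v switch is how you pass a real shell variable into awk, for example (a small sketch; the variable name is arbitrary):

# Pass the value of a shell variable into awk via -v.
wanted=2
awk -F '|' -v b="$wanted" '$2 == b { print $0 }' a
# prints: 2|2|文字|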

Reference
http://blog.csdn.net/sosodream/article/details/5746315


Looping over dates in a shell script

By 周海汉, 2013.9.5
Looping over dates is very useful when processing data that is stored per day, especially for testing, backfilling, deleting, or reprocessing data. But when the range crosses a month boundary, a plain numeric loop does not work.
The shell snippet below handles ranges spanning multiple days (and months) correctly.

#!/usr/bin/env bash
#author: Andy Zhou
#Date:2013.8.6

source dateutil.sh
begin=20130701
end=20130904

for (( d=$begin; d<=$end; d=`getnextday $d` )); do
    echo "date:"$d
    #. myshell.sh $d
done

The date helper dateutil.sh:

#!/usr/bin/env bash
#author:Andy Zhou
#date:2013.8.2
getnextday()
{
    #date -d "2013-09-10 +1 day " +%Y-%m-%d
    date -d "$1 +1 day " +%Y%m%d
}
getyearmonth()
{
    date +%Y%m --date=$1 #shortdate
}
getday()
{
    date +%d --date=$1 #shortdate
}

long_date()
{
    date +%Y-%m-%d --date=$1 #shortdate
}
short_date()
{
    date +%Y%m%d --date=$1 #longdate
}
long_yesterday()
{
     date --date='1 day ago' +%Y-%m-%d
}
yesterday()
{
     date --date='1 day ago' +%Y%m%d
}
long_today()
{
    date +%Y-%m-%d
}
today()
{
    date +%Y%m%d
}
now()
{
    date '+%Y-%m-%d %H:%M:%S'
}
last_month()
{
    date --date='1 month ago' '+%Y%m'
}
year()
{
     date +%Y
}
month()
{
    date +%m
}
sec2date()
{
     date -d "1970-01-01 UTC $1 seconds" "+%Y%m%d"
}
sec2datetime()
{
     date -d "1970-01-01 UTC $1 seconds" "+%Y%m%d %H:%M:%S"
}
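For reference, a quick interactive check of a few of these helpers (a sketch; GNU date is assumed, as in the functions above):

source dateutil.sh
getnextday 20130731      # -> 20130801, month boundary handled by date itself
getyearmonth 20130904    # -> 201309
long_date 20130904       # -> 2013-09-04
sec2date 1378300000      # epoch seconds -> yyyymmdd in local time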


Trying out Go

By 周海汉, 2013.8.30

Installation and testing

Download from the official site:

https://code.google.com/p/go/downloads/list

Unpacking creates a go directory.
 
[andy@s1 test]$ cat hello.go
package main

import "fmt"

func main() {
    fmt.Println("Hello, 世界")
}

[andy@s1 test]$ go build hello.go
hello.go:3:8: cannot find package "fmt" in any of:
/usr/local/go/src/pkg/fmt (from $GOROOT)
($GOPATH not set)
package runtime: cannot find package "runtime" in any of:
/usr/local/go/src/pkg/runtime (from $GOROOT)
($GOPATH not set)
Set the environment variables:
[andy@s1 ~]$ cat .bashrc
export GOROOT=/home/andy/go
export GOPATH=/home/andy/go/src/pkg
export PATH=$GOROOT/bin:$PATH
[andy@s1 test]$ go build hello.go

[andy@s1 test]$ ./hello
Hello, 世界

 

GOPATH

GOPATH is where go get puts the packages it downloads (e.g. from git).

Below is one foreign developer's test of how GOPATH behaves:

rday@rday-laptop:~/golang$ mkdir packages1
rday@rday-laptop:~/golang$ export GOPATH=~/golang/packages1/
rday@rday-laptop:~/golang$ go get github.com/rday/web
rday@rday-laptop:~/golang$ ls packages1/src/github.com/
rday
rday@rday-laptop:~/golang$ mkdir packages2
rday@rday-laptop:~/golang$ export GOPATH=~/golang/packages2/
rday@rday-laptop:~/golang$ go get github.com/alphazero/Go-Redis
rday@rday-laptop:~/golang$ ls packages2/src/github.com/
alphazero
rday@rday-laptop:~/golang$
When we change $GOPATH, and grab a new package, our new package is stored in the new $GOPATH directory
[andy@s1 test]$ !go
go build hello.go
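As the go get experiments later in this post show, GOPATH must not be the same directory as GOROOT. A minimal sketch of one common layout (the workspace path is my assumption, not from the post):

# Keep the toolchain (GOROOT) and the package workspace (GOPATH) separate.
export GOROOT=/home/andy/go        # where the go toolchain was unpacked
export GOPATH=$HOME/gowork         # assumed workspace directory, not under GOROOT
export PATH=$GOROOT/bin:$PATH
mkdir -p "$GOPATH"
go get github.com/rday/web         # lands under $GOPATH/src/github.com/rday/web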

 

Testing MySQL

Install the Go MySQL driver:

[andy@s1 pkg]$ mkdir mysql
[andy@s1 pkg]$ cd mysql
[andy@s1 mysql]$ pwd
/home/andy/go/src/pkg/mysql

#[andy@s1 mysql]$ export GOPATH=/home/andy/go/src/pkg/mysql
[andy@s1 ~]$ echo $GOPATH
/home/andy/go
[andy@s1 ~]$ go get github.com/go-sql-driver/mysql
warning: GOPATH set to GOROOT (/home/andy/go) has no effect
package github.com/go-sql-driver/mysql: cannot download, $GOPATH must not be set to $GOROOT. For more details see: go help gopath

[andy@s1 ~]$ echo $GOPATH
/home/andy/go/src/pkg
[andy@s1 ~]$ go get github.com/go-sql-driver/mysql
[andy@s1 pkg]$ find . -name mysql
./src/github.com/go-sql-driver/mysql
[andy@s1 pkg]$ cp -r ./src/github.com/go-sql-driver/mysql mysql

[andy@s1 mysql]$ ls
buffer.go      const.go   driver_test.go  infile.go  packets.go  result.go  statement.go    utils.go
connection.go  driver.go  errors.go       LICENSE    README.md   rows.go    transaction.go  utils_test.go
[andy@s1 mysql]$ pwd
/home/andy/go/src/pkg/mysql/src/github.com/go-sql-driver/mysql

[root@s1 mysql]# yum install mysql-devel mysql-server
[root@s1 mysql]# service mysql restart
mysql> use test;
Database changed
mysql> show tables;
Empty set (0.00 sec)

CREATE TABLE `student` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(20) DEFAULT NULL,
  `age` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=
mysql> create table student(id int primary key auto_increment,name varchar(20),age int,created date) DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.08 sec)

[andy@s1 test]$ cat my.go

//andy zhou 2013.8.27
//http://abloz.com

package main
import (
    _ "mysql"
    "database/sql"
    "fmt"
)

func main() {
    db := opendb("root:@/test?charset=utf8")
    id:=insert(db)
    query(db)
    update(db,id)

}

// open a database connection
func opendb(dbstr string) ( * sql.DB) {
//dsn: [username[:password]@][protocol[(address)]]/dbname[?param1=value1&paramN=valueN]
    db, err := sql.Open("mysql", dbstr)
    prerr(err)
    return db
}

// insert a row
func insert(db  * sql.DB) int64 {

    stmt, err := db.Prepare("INSERT INTO student SET id=?, name=?,age=?,created=?")
    prerr(err)

    res, err := stmt.Exec(0, "abloz1", 28, "2013-8-20")
    prerr(err)

    id, err := res.LastInsertId()
    prerr(err)

    fmt.Println(id)
    return id

}
// update a row
func update(db  *sql.DB,id int64) {
    stmt, err := db.Prepare("update student set name=? where id=?")
    prerr(err)

    res, err := stmt.Exec("abloz2", id)
    prerr(err)

    affect, err := res.RowsAffected()
    prerr(err)

    fmt.Println(affect)
}
// query rows
func query(db  * sql.DB) {

    rows, err := db.Query("SELECT * FROM student")
    prerr(err)

    for rows.Next() {
        var id int
        var name string
        var department string
        var created string
        err = rows.Scan(&id, &name, &department, &created)
        prerr(err)
        fmt.Println(id)
        fmt.Println(name)
        fmt.Println(department)
        fmt.Println(created)
    }
}

// delete a row
func del(db  * sql.DB, id int64) {
    stmt, err := db.Prepare("delete from student where id=?")
    prerr(err)

    res, err := stmt.Exec(id)
    prerr(err)

    affect, err := res.RowsAffected()
    prerr(err)

    fmt.Println(affect)
}
func prerr(err error) {
    if err != nil {
        panic(err)
    }
}

Build and run:
[andy@s1 test]$ go build my.go

[andy@s1 test]$ ./my
4
1
hello周
30
2013-08-27
2
abloz2
28
2013-08-20
3
abloz2
28
2013-08-20
4
abloz1
28
2013-08-20
1


Installing R on CentOS 6.4

周海汉, 2013.8.30

Installing on 64-bit CentOS 6.4. The official download location:
The official downloads there are quite old.
R-2.10.0-2.el5.x86_64.rpm 09-Nov-2009 16:45 14K
R-core-2.10.0-2.el5.x86_64.rpm 09-Nov-2009 16:45 31M
R-devel-2.10.0-2.el5.x86_64.rpm 09-Nov-2009 16:45 87K
ReadMe 31-Aug-2009 15:30 262
libRmath-2.10.0-2.el5.x86_64.rpm 09-Nov-2009 16:45 102K
libRmath-devel-2.10.0-2.el5.x86_64.rpm 09-Nov-2009 16:45 148K
Besides being old, the official rpm also fails to install:
[andy@s1 ~]$ sudo rpm -ivh R-core-2.10.0-2.el5.x86_64.rpm
warning: R-core-2.10.0-2.el5.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 97d3544e: NOKEY
error: Failed dependencies:
libtcl8.4.so()(64bit) is needed by R-core-2.10.0-2.el5.x86_64
libtk8.4.so()(64bit) is needed by R-core-2.10.0-2.el5.x86_64
perl(File::Copy::Recursive) is needed by R-core-2.10.0-2.el5.x86_64
[andy@s1 ~]$ sudo yum install R
No package R available.
Error: Nothing to do
Switch the repository to the Fedora Project one, then:
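The post does not show the repo change itself; one common way on CentOS 6 is to enable EPEL (a sketch; that epel-release comes from the CentOS extras repo is an assumption):

# Enable EPEL (the Fedora Project's add-on repo for RHEL/CentOS),
# which carries a current R package.
sudo yum install -y epel-release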
[andy@s1 ~]$ sudo yum install R
Downloading Packages:
(1/13): R-3.0.1-2.el6.x86_64.rpm                         |  19 kB     00:00
(2/13): R-core-3.0.1-2.el6.x86_64.rpm                                                                                                                                                                                 |  46 MB     02:59
(3/13): R-core-devel-3.0.1-2.el6.x86_64.rpm                                                                                                                                                                           |  90 kB     00:00
(4/13): R-devel-3.0.1-2.el6.x86_64.rpm                                                                                                                                                                                |  19 kB     00:00
(5/13): R-java-3.0.1-2.el6.x86_64.rpm                                                                                                                                                                                 |  20 kB     00:00
(6/13): R-java-devel-3.0.1-2.el6.x86_64.rpm                                                                                                                                                                           |  19 kB     00:00
(7/13): libRmath-3.0.1-2.el6.x86_64.rpm                                                                                                                                                                               | 115 kB     00:00
(8/13): libRmath-devel-3.0.1-2.el6.x86_64.rpm                                                                                                                                                                         |  24 kB     00:00
(9/13): pcre-devel-7.8-6.el6.x86_64.rpm                                                                                                                                                                               | 318 kB     00:00
(10/13): tcl-devel-8.5.7-6.el6.x86_64.rpm                                                                                                                                                                             | 162 kB     00:00
(11/13): texinfo-4.13a-8.el6.x86_64.rpm                                                                                                                                                                               | 668 kB     00:00
(12/13): texinfo-tex-4.13a-8.el6.x86_64.rpm                                                                                                                                                                           | 132 kB     00:00
(13/13): tk-devel-8.5.7-5.el6.x86_64.rpm
[andy@s1 ~]$ R

R version 3.0.1 (2013-05-16) -- "Good Sport"
Copyright (C) 2013 The R Foundation for Statistical Computing
Platform: x86_64-redhat-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> demo(graphics)

This shows the kinds of plots R can draw.
 
Various character sets and fonts:
> demo(Hershey)
To quit:
> q()
Save workspace image? [y/n/c]: n

Problems encountered exporting from Hive to MySQL with Sqoop

By 周海汉, 2013.8.22

Environment

hive version: hive-0.11.0
sqoop version: sqoop-1.4.4.bin__hadoop-1.0.0
Exporting from Hive to MySQL.
The MySQL table:

mysql> desc cps_activation;

+------------+-------------+------+-----+---------+----------------+
| Field      | Type        | Null | Key | Default | Extra          |
+------------+-------------+------+-----+---------+----------------+
| id         | int(11)     | NO   | PRI | NULL    | auto_increment |
| day        | date        | NO   | MUL | NULL    |                |
| pkgname    | varchar(50) | YES  |     | NULL    |                |
| cid        | varchar(50) | YES  |     | NULL    |                |
| pid        | varchar(50) | YES  |     | NULL    |                |
| activation | int(11)     | YES  |     | NULL    |                |
+------------+-------------+------+-----+---------+----------------+
6 rows in set (0.01 sec)

 

The Hive table:

hive> desc active;
OK
id int None
day string None
pkgname string None
cid string None
pid string None
activation int None

Testing the connection works:

[hadoop@hs11 ~]$ sqoop list-databases --connect jdbc:mysql://localhost:3306/ --username root --password admin
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
13/08/20 16:42:26 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/08/20 16:42:26 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
information_schema
easyhadoop
mysql
test
[hadoop@hs11 ~]$ sqoop list-databases --connect jdbc:mysql://localhost:3306/test --username root --password admin
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
13/08/20 16:42:40 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/08/20 16:42:40 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
information_schema
easyhadoop
mysql
test
[hadoop@hs11 ~]$ sqoop list-tables --connect jdbc:mysql://localhost:3306/test --username root --password admin
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
13/08/20 16:42:54 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/08/20 16:42:54 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
active

[hadoop@hs11 ~]$  sqoop create-hive-table --connect jdbc:mysql://localhost:3306/test --table active --username root --password admin --hive-table test
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
13/08/20 16:57:04 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/08/20 16:57:04 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
13/08/20 16:57:04 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
13/08/20 16:57:04 WARN tool.BaseSqoopTool: It seems that you've specified at least one of following:
13/08/20 16:57:04 WARN tool.BaseSqoopTool:      --hive-home
13/08/20 16:57:04 WARN tool.BaseSqoopTool:      --hive-overwrite
13/08/20 16:57:04 WARN tool.BaseSqoopTool:      --create-hive-table
13/08/20 16:57:04 WARN tool.BaseSqoopTool:      --hive-table
13/08/20 16:57:04 WARN tool.BaseSqoopTool:      --hive-partition-key
13/08/20 16:57:04 WARN tool.BaseSqoopTool:      --hive-partition-value
13/08/20 16:57:04 WARN tool.BaseSqoopTool:      --map-column-hive
13/08/20 16:57:04 WARN tool.BaseSqoopTool: Without specifying parameter --hive-import. Please note that
13/08/20 16:57:04 WARN tool.BaseSqoopTool: those arguments will not be used in this session. Either
13/08/20 16:57:04 WARN tool.BaseSqoopTool: specify --hive-import to apply them correctly or remove them
13/08/20 16:57:04 WARN tool.BaseSqoopTool: from command line to remove this warning.
13/08/20 16:57:04 INFO tool.BaseSqoopTool: Please note that --hive-home, --hive-partition-key,
13/08/20 16:57:04 INFO tool.BaseSqoopTool:       hive-partition-value and --map-column-hive options are
13/08/20 16:57:04 INFO tool.BaseSqoopTool:       are also valid for HCatalog imports and exports
13/08/20 16:57:04 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
13/08/20 16:57:05 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `active` AS t LIMIT 1
13/08/20 16:57:05 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `active` AS t LIMIT 1
13/08/20 16:57:05 WARN hive.TableDefWriter: Column day had to be cast to a less precise type in Hive
13/08/20 16:57:05 INFO hive.HiveImport: Loading uploaded data into Hive

1. Connection refused

[hadoop@hs11 ~]$ sqoop export --connect jdbc:mysql://localhost/test --username root  --password admin --table test --export-dir /user/hive/warehouse/actmp
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
13/08/21 09:14:07 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/08/21 09:14:07 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
13/08/21 09:14:07 INFO tool.CodeGenTool: Beginning code generation
13/08/21 09:14:07 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `test` AS t LIMIT 1
13/08/21 09:14:07 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `test` AS t LIMIT 1
13/08/21 09:14:07 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/hadoop-1.1.2
Note: /tmp/sqoop-hadoop/compile/0b5cae714a00b3940fb793c3694408ac/test.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
13/08/21 09:14:08 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/0b5cae714a00b3940fb793c3694408ac/test.jar
13/08/21 09:14:08 INFO mapreduce.ExportJobBase: Beginning export of test
13/08/21 09:14:09 INFO input.FileInputFormat: Total input paths to process : 1
13/08/21 09:14:09 INFO input.FileInputFormat: Total input paths to process : 1
13/08/21 09:14:09 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/08/21 09:14:09 WARN snappy.LoadSnappy: Snappy native library not loaded
13/08/21 09:14:10 INFO mapred.JobClient: Running job: job_201307251523_0059
13/08/21 09:14:11 INFO mapred.JobClient:  map 0% reduce 0%
13/08/21 09:14:20 INFO mapred.JobClient: Task Id : attempt_201307251523_0059_m_000000_0, Status : FAILED
java.io.IOException: com.mysql.jdbc.CommunicationsException: Communications link failure due to underlying exception:

** BEGIN NESTED EXCEPTION **

java.net.ConnectException
MESSAGE: Connection refused

STACKTRACE:

java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:529)
at java.net.Socket.connect(Socket.java:478)
at java.net.Socket.<init>(Socket.java:375)
at java.net.Socket.<init>(Socket.java:218)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:256)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:271)
at com.mysql.jdbc.Connection.createNewIO(Connection.java:2771)
at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:185)
at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:294)
at org.apache.sqoop.mapreduce.AsyncSqlRecordWriter.<init>(AsyncSqlRecordWriter.java:76)
at org.apache.sqoop.mapreduce.ExportOutputFormat$ExportRecordWriter.<init>(ExportOutputFormat.java:95)
at org.apache.sqoop.mapreduce.ExportOutputFormat.getRecordWriter(ExportOutputFormat.java:77)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:628)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:753)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.mapred.Child.main(Child.java:249)

** END NESTED EXCEPTION **

Last packet sent to the server was 1 ms ago.
at org.apache.sqoop.mapreduce.ExportOutputFormat.getRecordWriter(ExportOutputFormat.java:79)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:628)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:753)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: com.mysql.jdbc.CommunicationsException: Communications link failure due to underlying exception:

** BEGIN NESTED EXCEPTION **

java.net.ConnectException
MESSAGE: Connection refused

This turned out to be a MySQL connectivity and privilege issue: the export runs as map tasks on other nodes, so a JDBC URL pointing at localhost connects to the wrong machine (later commands switch to the server's IP, 10.10.20.11), and the MySQL user must be allowed to connect from remote hosts:
 mysql> show grants;
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY PASSWORD '*4ACFE3202A5FF5CF467898FC58AAB1D615029441' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
mysql> create table test (mkey varchar(30),pkg varchar(50),cid varchar(20),pid varchar(50),count int,primary key(mkey,pkg,cid,pid) );
alter ignore table cps_activation add unique index_day_pkgname_cid_pid (`day`,`pkgname`,`cid`,`pid`);
Query OK, 0 rows affected (0.03 sec)
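A quick sanity check from one of the TaskTracker nodes (a sketch; it assumes the mysql client is installed there):

# Run on a worker node: can it reach MySQL on the database host's IP?
mysql -h 10.10.20.11 -u root -p -e 'select 1'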

2. Table does not exist

===========
[hadoop@hs11 ~]$ sqoop export --connect jdbc:mysql://10.10.20.11/test --username root  --password admin --table test --export-dir /user/hive/warehouse/actmp
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
13/08/21 09:16:26 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/08/21 09:16:26 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
13/08/21 09:16:26 INFO tool.CodeGenTool: Beginning code generation
13/08/21 09:16:27 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `test` AS t LIMIT 1
13/08/21 09:16:27 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `test` AS t LIMIT 1
13/08/21 09:16:27 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/hadoop-1.1.2
Note: /tmp/sqoop-hadoop/compile/74d18a91ec141f2feb777dc698bf7eb4/test.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
13/08/21 09:16:28 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/74d18a91ec141f2feb777dc698bf7eb4/test.jar
13/08/21 09:16:28 INFO mapreduce.ExportJobBase: Beginning export of test
13/08/21 09:16:29 INFO input.FileInputFormat: Total input paths to process : 1
13/08/21 09:16:29 INFO input.FileInputFormat: Total input paths to process : 1
13/08/21 09:16:29 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/08/21 09:16:29 WARN snappy.LoadSnappy: Snappy native library not loaded
13/08/21 09:16:29 INFO mapred.JobClient: Running job: job_201307251523_0060
13/08/21 09:16:30 INFO mapred.JobClient:  map 0% reduce 0%
13/08/21 09:16:38 INFO mapred.JobClient: Task Id : attempt_201307251523_0060_m_000000_0, Status : FAILED
java.io.IOException: Can't export data, please check task tracker logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.util.NoSuchElementException
at java.util.AbstractList$Itr.next(AbstractList.java:350)
at test.__loadFromFields(test.java:252)
at test.parse(test.java:201)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
… 10 more
When exporting to MySQL the target table must already exist, otherwise Sqoop reports an error.
This particular failure is caused by the fields Sqoop parses out of the file not lining up with the columns of the MySQL table. You have to tell Sqoop the file's field delimiter so it can split the records correctly; Hive's default field delimiter is '\001'.
===========

3. The null placeholder must be specified

Without the null placeholders specified, the fields ended up misaligned.
[hadoop@hs11 ~]$ sqoop export --connect jdbc:mysql://10.10.20.11/test --username root  --password admin --table test --export-dir /user/hive/warehouse/actmp --input-fields-terminated-by '\001'
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
13/08/21 09:21:07 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/08/21 09:21:07 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
13/08/21 09:21:07 INFO tool.CodeGenTool: Beginning code generation
13/08/21 09:21:07 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `test` AS t LIMIT 1
13/08/21 09:21:07 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `test` AS t LIMIT 1
13/08/21 09:21:07 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/hadoop-1.1.2
Note: /tmp/sqoop-hadoop/compile/04d183c9e534cdb8d735e1bdc4be3deb/test.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
13/08/21 09:21:08 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/04d183c9e534cdb8d735e1bdc4be3deb/test.jar
13/08/21 09:21:08 INFO mapreduce.ExportJobBase: Beginning export of test
13/08/21 09:21:09 INFO input.FileInputFormat: Total input paths to process : 1
13/08/21 09:21:09 INFO input.FileInputFormat: Total input paths to process : 1
13/08/21 09:21:09 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/08/21 09:21:09 WARN snappy.LoadSnappy: Snappy native library not loaded
13/08/21 09:21:10 INFO mapred.JobClient: Running job: job_201307251523_0061
13/08/21 09:21:11 INFO mapred.JobClient:  map 0% reduce 0%
13/08/21 09:21:17 INFO mapred.JobClient:  map 25% reduce 0%
13/08/21 09:21:19 INFO mapred.JobClient:  map 50% reduce 0%
13/08/21 09:21:21 INFO mapred.JobClient: Task Id : attempt_201307251523_0061_m_000001_0, Status : FAILED
java.io.IOException: Can't export data, please check task tracker logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.NumberFormatException: For input string: "665A5FFA-32C9-9463-1943-840A5FEAE193"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
at java.lang.Integer.parseInt(Integer.java:458)
at java.lang.Integer.valueOf(Integer.java:554)
at test.__loadFromFields(test.java:264)
at test.parse(test.java:201)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
… 10 more
===========

4. Success

[hadoop@hs11 ~]$ sqoop export --connect jdbc:mysql://10.10.20.11/test --username root  --password admin --table test --export-dir /user/hive/warehouse/actmp --input-fields-terminated-by '\001' --input-null-string '\N' --input-null-non-string '\N'
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
13/08/21 09:36:13 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/08/21 09:36:13 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
13/08/21 09:36:13 INFO tool.CodeGenTool: Beginning code generation
13/08/21 09:36:13 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `test` AS t LIMIT 1
13/08/21 09:36:13 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `test` AS t LIMIT 1
13/08/21 09:36:13 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/hadoop-1.1.2
Note: /tmp/sqoop-hadoop/compile/e22d31391498b790d799897cde25047d/test.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
13/08/21 09:36:14 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/e22d31391498b790d799897cde25047d/test.jar
13/08/21 09:36:14 INFO mapreduce.ExportJobBase: Beginning export of test
13/08/21 09:36:15 INFO input.FileInputFormat: Total input paths to process : 1
13/08/21 09:36:15 INFO input.FileInputFormat: Total input paths to process : 1
13/08/21 09:36:15 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/08/21 09:36:15 WARN snappy.LoadSnappy: Snappy native library not loaded
13/08/21 09:36:16 INFO mapred.JobClient: Running job: job_201307251523_0064
13/08/21 09:36:17 INFO mapred.JobClient:  map 0% reduce 0%
13/08/21 09:36:23 INFO mapred.JobClient:  map 25% reduce 0%
13/08/21 09:36:25 INFO mapred.JobClient:  map 100% reduce 0%
13/08/21 09:36:27 INFO mapred.JobClient: Job complete: job_201307251523_0064
13/08/21 09:36:27 INFO mapred.JobClient: Counters: 18
13/08/21 09:36:27 INFO mapred.JobClient:   Job Counters
13/08/21 09:36:27 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=13151
13/08/21 09:36:27 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/08/21 09:36:27 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/08/21 09:36:27 INFO mapred.JobClient:     Rack-local map tasks=2
13/08/21 09:36:27 INFO mapred.JobClient:     Launched map tasks=4
13/08/21 09:36:27 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
13/08/21 09:36:27 INFO mapred.JobClient:   File Output Format Counters
13/08/21 09:36:27 INFO mapred.JobClient:     Bytes Written=0
13/08/21 09:36:27 INFO mapred.JobClient:   FileSystemCounters
13/08/21 09:36:27 INFO mapred.JobClient:     HDFS_BYTES_READ=1519
13/08/21 09:36:27 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=234149
13/08/21 09:36:27 INFO mapred.JobClient:   File Input Format Counters
13/08/21 09:36:27 INFO mapred.JobClient:     Bytes Read=0
13/08/21 09:36:27 INFO mapred.JobClient:   Map-Reduce Framework
13/08/21 09:36:27 INFO mapred.JobClient:     Map input records=6
13/08/21 09:36:27 INFO mapred.JobClient:     Physical memory (bytes) snapshot=663863296
13/08/21 09:36:27 INFO mapred.JobClient:     Spilled Records=0
13/08/21 09:36:27 INFO mapred.JobClient:     CPU time spent (ms)=3720
13/08/21 09:36:27 INFO mapred.JobClient:     Total committed heap usage (bytes)=2013790208
13/08/21 09:36:27 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=5583151104
13/08/21 09:36:27 INFO mapred.JobClient:     Map output records=6
13/08/21 09:36:27 INFO mapred.JobClient:     SPLIT_RAW_BYTES=571
13/08/21 09:36:27 INFO mapreduce.ExportJobBase: Transferred 1.4834 KB in 12.1574 seconds (124.9446 bytes/sec)
13/08/21 09:36:27 INFO mapreduce.ExportJobBase: Exported 6 records.
———-

5. The MySQL varchar column is too short to hold the value

java.io.IOException: com.mysql.jdbc.MysqlDataTruncation: Data truncation: Data too long for column 'pid' at row 1
at org.apache.sqoop.mapreduce.AsyncSqlRecordWriter.close(AsyncSqlRecordWriter.java:192)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.close(MapTask.java:651)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:766)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: com.mysql.jdbc.MysqlDataTruncation: Data truncation: Data too long for column 'pid' at row 1
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2983)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1631)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1723)
at com.mysql.jdbc.Connection.execSQL(Connection.java:3283)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1332)
at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:882)
at org.apache.sqoop.mapreduce.AsyncSqlOutputFormat$AsyncSqlExecThread.run(AsyncSqlOutputFormat.java:233)
———————-

6. Date format problem

MySQL's date type requires yyyy-mm-dd, so the string in Hive must already be in that format. I originally used yyyymmdd and got the error below.
13/08/21 17:42:44 INFO mapred.JobClient: Task Id : attempt_201307251523_0079_m_000000_1, Status : FAILED
java.io.IOException: Can’t export data, please check task tracker logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.IllegalArgumentException
at java.sql.Date.valueOf(Date.java:138)
at cps_activation.__loadFromFields(cps_activation.java:308)
at cps_activation.parse(cps_activation.java:255)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
… 10 more
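One way around this is to rewrite the day column into yyyy-mm-dd on the Hive side before exporting. A sketch only: the re-staging query below is my own assumption, reusing the active table and export directory named in this post.

# Re-stage the export directory with day reformatted to yyyy-mm-dd,
# then run the same sqoop export as in step 4.
hive -e "
INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/actmp'
SELECT id,
       concat(substr(day, 1, 4), '-', substr(day, 5, 2), '-', substr(day, 7, 2)),
       pkgname, cid, pid, activation
FROM active;
"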
———————-

7. Fields misaligned or types inconsistent

Caused by: java.lang.NumberFormatException: For input string: "06701A4A-0808-E9A8-0D28-A8020B494E37"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
at java.lang.Integer.parseInt(Integer.java:458)
at java.lang.Integer.valueOf(Integer.java:554)
at test.__loadFromFields(test.java:264)
at test.parse(test.java:201)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
… 10 more

scala HelloWorld

By 周海汉, 2013.8.1

Lately the network blocking has been so strict that my posts could not get out, so I am reposting this.

Scala is a functional language on the JVM, known for its programming productivity and its strength in distributed processing; Spark itself is written in Scala.
After downloading it, start the REPL:
[hadoop@hs11 scala-2.11.0-M4]$ scala
Welcome to Scala version 2.11.0-M4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.6.0_45).
Type in expressions to have them evaluated.
Type :help for more information.

scala> object Hello{
| def main(arg: Array[String]) {
| println("hello world")
| }
| }
defined object Hello

scala> Hello.main(null)
hello world
scala> :q
[hadoop@hs11 examples]$ cat HelloWorld.scala
package examples
object HelloWorld {
def main(args: Array[String]) {
println("Hello, world!")
}
}
[hadoop@hs11 examples]$ scalac HelloWorld.scala
[hadoop@hs11 examples]$ ls examples/Hello*
examples/HelloWorld.class  examples/HelloWorld$.class
[hadoop@hs11 examples]$ scala examples.HelloWorld
Hello, world!
If the object extends App, every statement in the object body runs automatically, so main can be omitted:
[hadoop@hs11 examples]$ cat Hello1.scala
package examples
object Hello1 extends App {
println("Hello world, App!")
}
[hadoop@hs11 examples]$ scala examples.Hello1
Hello world, App!

Sqoop 1.99 installation and configuration

By 周海汉, 2013.8.20
http://abloz.com

Summary:
1. Installing and configuring Sqoop 1.99
2. Using the client
3. Exporting data from HBase/Hive to MySQL

Version
sqoop-1.99.2-bin-hadoop100

Sqoop is the tool for moving data between Hadoop and an RDBMS. The new 1.99 package has two parts, a client and a server. The server must be installed on one node of the cluster, and every client connects to it. The server acts as the MapReduce client, so Hadoop must be installed alongside the Sqoop server; the clients can run anywhere. This design makes data transfer more flexible. The older 1.4.4 line has no client/server split.

Server-side installation

1. Confirm Hadoop is available on this machine:
hadoop fs -ls

Because Hadoop 1.x and 2.x are incompatible, the Sqoop binaries also come in hadoop100 and hadoop200 flavours. I'm on Hadoop 1.1.2, so I pair it with sqoop-1.99.2-bin-hadoop100.
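A quick way to check which major Hadoop line a node is running, and therefore which tarball to pick:

# Hadoop 1.x -> sqoop-1.99.2-bin-hadoop100, Hadoop 2.x -> sqoop-1.99.2-bin-hadoop200.
hadoop version | head -1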
Unpack it:
tar zxvf sqoop-1.99.2-bin-hadoop100.tar.gz
cd sqoop-1.99.2-bin-hadoop100

2. Install dependent libraries and components

./bin/addtowar.sh -hadoop-auto

[hadoop@hs11 sqoop-1.99.2-bin-hadoop100]$ ./bin/addtowar.sh -hadoop-auto

Non of expected directories with Hadoop exists

Usage : addtowar.sh
Options: -hadoop-auto Try to guess hadoop version and path
-hadoop-version HADOOP_VERSION Specify used version
-hadoop-path HADOOP_PATHS Where to find hadoop jars (multiple paths with Hadoop jars separated by ':')
-jars JARS_PATH Special jars that should be added (multiple JAR paths separated by ':')
-war SQOOP_WAR Target Sqoop war file where all jars should be ingested

My Hadoop is not installed in a standard system path, so the script needs a tweak:
[hadoop@hs11 sqoop-1.99.2-bin-hadoop100]$ vi ./bin/addtowar.sh
hadoopPossiblePaths="/home/hadoop/hadoop-1.1.2 /usr/lib/hadoop /usr/lib/hadoop-mapreduce/ /usr/lib/hadoop-yarn/ /usr/lib/hadoop-hdfs"

[hadoop@hs11 sqoop-1.99.2-bin-hadoop100]$ ./bin/addtowar.sh -hadoop-auto
./bin/addtowar.sh: line 126: [: missing `]’
Hadoop version: 1.1.2
Hadoop path: /home/hadoop/hadoop-1.1.2
Extra jars:

Injecting following Hadoop JARs

/home/hadoop/hadoop-1.1.2/hadoop-core-1.1.2.jar
/home/hadoop/hadoop-1.1.2/lib/jackson-core-asl-1.8.8.jar
/home/hadoop/hadoop-1.1.2/lib/jackson-mapper-asl-1.8.8.jar
/home/hadoop/hadoop-1.1.2/lib/commons-configuration-1.6.jar
/home/hadoop/hadoop-1.1.2/lib/commons-logging-api-1.0.4.jar
/home/hadoop/hadoop-1.1.2/lib/slf4j-api-1.4.3.jar
/home/hadoop/hadoop-1.1.2/lib/slf4j-log4j12-1.4.3.jar

Backing up original WAR file to ./bin/../server/webapps/sqoop.war_2013-08-20_09:36:01.263437795

New Sqoop WAR file with added 'Hadoop JARs' at ./bin/../server/webapps/sqoop.war

Line 126 of the script has a whitespace problem around the ']', but it does not affect the result. After fixing it:
[hadoop@hs11 sqoop-1.99.2-bin-hadoop100]$ vi ./bin/addtowar.sh
[hadoop@hs11 sqoop-1.99.2-bin-hadoop100]$ ./bin/addtowar.sh -hadoop-auto
Hadoop version: 1.1.2
Hadoop path: /home/hadoop/hadoop-1.1.2
Extra jars:

Specified Sqoop WAR './bin/../server/webapps/sqoop.war' already contains Hadoop JAR files

You can also specify things explicitly: -hadoop-version for the Hadoop version and -hadoop-path for the directories; if the pieces live in several directories, separate them with ':'.
For example:
hadoop 1.0
./bin/addtowar.sh -hadoop-version 1.0 -hadoop-path /usr/lib/hadoop-common:/usr/lib/hadoop-hdfs:/usr/lib/hadoop-mpred

hadoop 2.0
./bin/addtowar.sh -hadoop-version 2.0 -hadoop-path /usr/lib/hadoop-common:/usr/lib/hadoop-hdfs:/usr/lib/hadoop-yarn

The -jars option of addtowar.sh can also bundle additional jar files.
Bundling the MySQL JDBC driver:
Because of license incompatibilities, Sqoop does not ship the MySQL JDBC driver; download it yourself and bundle it:
http://dev.mysql.com/downloads/mirror.php?id=13597
wget http://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.0.8.tar.gz/from/http://cdn.mysql.com/

Note that the download is a tar.gz containing both the source and the jar; after unpacking, only the jar is needed.
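Extracting just the driver jar might look like this (a sketch; it assumes the download was saved as mysql-connector-java-5.0.8.tar.gz and copies the jar to the hive lib directory used below):

# Unpack the Connector/J tarball and keep only the bundled jar.
tar zxvf mysql-connector-java-5.0.8.tar.gz
cp mysql-connector-java-5.0.8/mysql-connector-java-5.0.8-bin.jar /home/hadoop/hive-0.11.0/lib/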

[hadoop@hs11 sqoop-1.99.2-bin-hadoop100]$ ./bin/addtowar.sh -jars /home/hadoop/hive-0.11.0/lib/mysql-connector-java-5.0.8-bin.jar
Hadoop version:
Hadoop path:
Extra jars: /home/hadoop/hive-0.11.0/lib/mysql-connector-java-5.0.8-bin.jar

Injecting following additional JARs

/home/hadoop/hive-0.11.0/lib/mysql-connector-java-5.0.8-bin.jar

Backing up original WAR file to ./bin/../server/webapps/sqoop.war_2013-08-20_09:49:32.401896012

New Sqoop WAR file with added 'JARs' at ./bin/../server/webapps/sqoop.war

3. Configure the Sqoop server
server/conf holds the server configuration files, including the Tomcat settings. The default PropertiesConfigurationProvider is sufficient; if you need something else, edit sqoop.config.provider in sqoop_bootstrap.properties.
[hadoop@hs11 sqoop-1.99.2-bin-hadoop100]$ cd server/conf
[hadoop@hs11 conf]$ ls
catalina.policy catalina.properties context.xml logging.properties server.xml sqoop_bootstrap.properties sqoop.properties tomcat-users.xml web.xml
[hadoop@hs11 conf]$ cat sqoop_bootstrap.properties
sqoop.config.provider=org.apache.sqoop.core.PropertiesConfigurationProvider

sqoop.properties contains the remaining settings; it may need minor tuning to fit your environment.
#org.apache.sqoop.submission.engine.mapreduce.configuration.directory=/etc/hadoop/conf/
org.apache.sqoop.submission.engine.mapreduce.configuration.directory=/home/hadoop/hadoop-1.1.2/conf/

4. Start and stop the server
./bin/sqoop.sh server start

[hadoop@hs11 sqoop-1.99.2-bin-hadoop100]$ ./bin/sqoop.sh server start
Sqoop home directory: /home/hadoop/sqoop-1.99.2-bin-hadoop100…
Using CATALINA_BASE: /home/hadoop/sqoop-1.99.2-bin-hadoop100/server
Using CATALINA_HOME: /home/hadoop/sqoop-1.99.2-bin-hadoop100/server
Using CATALINA_TMPDIR: /home/hadoop/sqoop-1.99.2-bin-hadoop100/server/temp
Using JRE_HOME: /usr/java/jdk1.6.0_45
Using CLASSPATH: /home/hadoop/sqoop-1.99.2-bin-hadoop100/server/bin/bootstrap.jar

[hadoop@hs11 sqoop-1.99.2-bin-hadoop100]$ ./bin/sqoop.sh server stop
Sqoop home directory: /home/hadoop/sqoop-1.99.2-bin-hadoop100…
Using CATALINA_BASE: /home/hadoop/sqoop-1.99.2-bin-hadoop100/server
Using CATALINA_HOME: /home/hadoop/sqoop-1.99.2-bin-hadoop100/server
Using CATALINA_TMPDIR: /home/hadoop/sqoop-1.99.2-bin-hadoop100/server/temp
Using JRE_HOME: /usr/java/jdk1.6.0_45
Using CLASSPATH: /home/hadoop/sqoop-1.99.2-bin-hadoop100/server/bin/bootstrap.jar

5. Client installation
The client needs no configuration; just unpack the downloaded tarball on the client machine.
[hadoop@hs11 sqoop-1.99.2-bin-hadoop100]$ ./bin/sqoop.sh client
Sqoop home directory: /home/hadoop/sqoop-1.99.2-bin-hadoop100…
Aug 20, 2013 10:07:54 AM java.util.prefs.FileSystemPreferences$2 run
INFO: Created user preferences directory.
Sqoop Shell: Type 'help' or 'h' for help.

sqoop:000>
Or run a sqoop script:
sqoop.sh client /path/to/your/script.sqoop

Detailed usage will be covered in the next post.
6. References:
http://sqoop.apache.org/docs/1.99.2/Installation.html
http://sqoop.apache.org/docs/1.99.2/Sqoop5MinutesDemo.html
http://sqoop.apache.org/docs/1.99.2/CommandLineClient.html


Erlang insights

abloz.com

By 周海汉, 2013.8.6

Erlang, the well-known concurrency-oriented language, is outstanding at large-scale concurrent computation, but its odd syntax and peculiar conventions make for a steep learning curve. OTP stands for Open Telecom Platform.

 

1. Build and install:

Completely standard, nothing unusual:

./configure
make
sudo make install
erl shell
strider 1> erl
Erlang (BEAM) emulator version 5.3 [hipe] [threads:0]

Eshell V5.3  (abort with ^G)
1>Str = "abcd".
"abcd"
2> L = length(Str).
4
3> Descriptor = {L, list_to_atom(Str)}.
{4,abcd}
4> L.
4
5> b().
Descriptor = {4,abcd}
L = 4
Str = "abcd"
ok
6> f(L).
ok
7> b().
Descriptor = {4,abcd}
Str = "abcd"
ok
8> f(L).
ok
9> {L, _} = Descriptor.
{4,abcd}
10> L.
4
11> {P, Q, R} = Descriptor.
** exception error: no match of right hand side value {4,abcd}
12> P.
* 1: variable 'P' is unbound **
13> Descriptor.
{4,abcd}
14>{P, Q} = Descriptor.
{4,abcd}
15> P.
4
16> f().
ok
17> put(aa, hello).
undefined
18> get(aa).
hello
19> Y = test1:demo(1).
11
20> get().
[{aa,worked}]
21> put(aa, hello).
worked
22> Z = test1:demo(2).
** exception error: no match of right hand side value 1
     in function  test1:demo/1
23> Z.
* 1: variable 'Z' is unbound **
24> get(aa).
hello
25> erase(), put(aa, hello).
undefined
26> spawn(test1, demo, [1]).
<0.57.0>
27> get(aa).
hello
28> io:format("hello hello~n").
hello hello
ok
29> e(28).
hello hello
ok
30> v(28).
ok
31> c(ex).
{ok,ex}
32> rr(ex).
[rec]
33> rl(rec).
-record(rec,{a,b = val()}).
ok
34> #rec{}.
** exception error: undefined shell command val/0
35> #rec{b = 3}.
#rec{a = undefined,b = 3}
36> rp(v(-1)).
#rec{a = undefined,b = 3}
ok
37> rd(rec, {f = orddict:new()}).
rec
38> #rec{}.
#rec{f = []}
ok
39> rd(rec, {c}), A.
* 1: variable 'A' is unbound **
40> #rec{}.
#rec{c = undefined}
ok
41> test1:loop(0).
Hello Number: 0
Hello Number: 1
Hello Number: 2
Hello Number: 3

User switch command
 --> i
 --> c
.
.
.
Hello Number: 3374
Hello Number: 3375
Hello Number: 3376
Hello Number: 3377
Hello Number: 3378
** exception exit: killed
42> E = ets:new(t, []).
17
43> ets:insert({d,1,2}).
** exception error: undefined function ets:insert/1
44> ets:insert(E, {d,1,2}).
** exception error: argument is of wrong type
     in function  ets:insert/2
        called as ets:insert(16,{d,1,2})
45> f(E).
ok
46> catch_exception(true).
false
47> E = ets:new(t, []).
18
48> ets:insert({d,1,2}).
* exception error: undefined function ets:insert/1
49> ets:insert(E, {d,1,2}).
true
50> halt().
strider 2>

Impressions:
Erlang statements end with a period ".". So the first time you use the shell the way you would use Python, you get no response and no error, simply because you have not typed the terminating period.
Variables start with an uppercase letter and, once bound, cannot be rebound.
pid ! msg sends msg to the process pid. The -> can be read as introducing a function body, similar to the braces in C.

The two modules used in the shell session above:
[hadoop@hs11 erl]$ cat test1.erl
-module(test1).
-export([demo/1, loop/1]).

demo(X) ->
put(aa,hello),
X + 10.

loop(N) ->
io:format("Hello Number: ~w~n",[N]),
loop(N+1).

[hadoop@hs11 erl]$ cat math1.erl
-module(math1).
-export([fib/1, fac/1]).

fib(0)->1;
fib(1)->1;
fib(N)->fib(N-1)+fib(N-2).

fac(0)->1;
fac(N)->N*fac(N-1).
