This repository was archived by the owner on Jul 18, 2018. It is now read-only.

Commit 9735020

Author: Tong Zhang (committed)

First commit

0 parents  commit 9735020


45 files changed, +10089 -0 lines changed

CHANGES

+7
@@ -0,0 +1,7 @@
---
0.2 (Aug 2016)

This is the first release. It only supports binary classification and regression, with significant simplifications of the original RGF algorithm for speed considerations. Additional functionality will be supported in future releases.

CMakeLists.txt

+21
@@ -0,0 +1,21 @@
# CMakeLists file
#
cmake_minimum_required (VERSION 2.8.0)

project (FastRGF)

set(CMAKE_CXX_FLAGS "-O3 -std=c++11")

# you may need to use the following for g++-4.8
#set(CMAKE_CXX_FLAGS "-O3 -std=c++11 -pthread")

# debug build flags
#set(CMAKE_CXX_FLAGS "-g -std=c++11 -Wall")

include_directories(include)

add_subdirectory(src/base)
add_subdirectory(src/forest)
add_subdirectory(src/exe)

LICENSE

+22
@@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2016 Baidu, Inc. All Rights Reserved.

Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

README.md

+71
@@ -0,0 +1,71 @@
----------
# FastRGF
### Multi-core implementation of Regularized Greedy Forest [RGF]

### Version 0.2 (August 2016) by Tong Zhang

---------
#### 1. Introduction

This software package provides a multi-core implementation of a simplified version of the Regularized Greedy Forest (RGF) algorithm described in **[RGF]**. Please cite the paper if you find the software useful.

RGF is a machine learning method for building decision forests that has been used to win several Kaggle competitions. In our experience it works better than *gradient boosting* on many relatively large datasets.

The implementation employs the following concepts described in the **[RGF]** paper:

- tree node regularization
- fully-corrective update
- greedy node expansion with a trade-off between splitting a leaf node of the current tree and splitting the root of a new tree

However, various simplifications are made to accelerate training. Therefore, unlike the original RGF program (see <http://stat.rutgers.edu/home/tzhang/software/rgf/>), this software does not reproduce the results in the paper.

The greedy tree node optimization employs a second-order Newton approximation for general loss functions. For the logistic regression loss, which works especially well for many binary classification problems, this approach was considered in **[PL]**; for general loss functions, the second-order approximation was considered in **[ZCS]**.
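As a rough sketch of the idea (this is the standard second-order formulation from the boosting literature, not necessarily the exact objective coded here): with g_i and h_i denoting the first and second derivatives of the loss at the current prediction for data point i, and λ an L2 regularization parameter on the node value (presumably the role played by dtree.lamL2), the quadratic approximation for a constant value v assigned to a node is

    \min_{v}\; \sum_{i \in \text{node}} \Big[ g_i\, v + \tfrac{1}{2} h_i\, v^2 \Big] + \tfrac{\lambda}{2} v^2
    \quad\Longrightarrow\quad
    v^{*} = -\,\frac{\sum_{i} g_i}{\sum_{i} h_i + \lambda}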
#### 2. Installation
Please see the file [CHANGES](CHANGES) for version information.
The software is written in C++11 and has been tested under Linux and macOS. It requires g++ version 4.8 or above and cmake version 2.8 or above.

If you use *g++-4.8*, then after running the examples you may get error messages similar to the following:

    terminate called after throwing an instance of 'std::system_error'
      what(): Enable multithreading to use std::thread: Operation not permitted

If this occurs, you need to add the **-pthread** flag to the variable CMAKE_CXX_FLAGS in [CMakeLists.txt](CMakeLists.txt) in order to enable multi-threading. This problem appears to be a bug in the g++ compiler, and there may be variations of it on your system that require different fixes.
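In practice this means using the commented-out line already provided in [CMakeLists.txt](CMakeLists.txt), i.e. changing the flags to:

    # default flags
    #set(CMAKE_CXX_FLAGS "-O3 -std=c++11")
    # flags with multi-threading explicitly enabled for g++-4.8
    set(CMAKE_CXX_FLAGS "-O3 -std=c++11 -pthread")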
To install the binaries, unpack the software into a directory.

* The source files are in the subdirectories include/ and src/.
* The executables are under the subdirectory bin/.
* The examples are under the subdirectory examples/.

To create the executables, do the following:

    cd build/
    cmake ..
    make
    make install

The following executables will be installed under the subdirectory bin/:

* forest_train: train RGF and save the model
* forest_predict: apply a trained model to test data

You may use the option -h to show the command-line options (options can also be provided in a configuration file).
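For example, assuming the install step above completed, the full option list can be printed from the top-level directory with:

    bin/forest_train -h
    bin/forest_predict -h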
#### 3. Examples
Go to the subdirectory examples/ and follow the instructions in [README.md](examples/README.md) (it also contains some tips for parameter tuning).

#### 4. Contact
Tong Zhang

#### 5. Copyright
The software is distributed under the MIT license. Please read the file [LICENSE](LICENSE).

#### 6. References

**[RGF]** Rie Johnson and Tong Zhang. [Learning Nonlinear Functions Using Regularized Greedy Forest](http://arxiv.org/abs/1109.0887), *IEEE Trans. on Pattern Analysis and Machine Intelligence, 36:942-954*, 2014.

**[PL]** Ping Li. Robust LogitBoost and Adaptive Base Class (ABC) LogitBoost, *UAI*, 2010.

**[ZCS]** Zhaohui Zheng, Hongyuan Zha, Tong Zhang, Olivier Chapelle, Keke Chen, and Gordon Sun. A General Boosting Method and its Application to Learning Ranking Functions for Web Search, *NIPS*, 2007.

bin/.gitignore

+4
@@ -0,0 +1,4 @@
# Ignore everything in this directory
*
# Except this file
!.gitignore

build/.gitignore

+4
@@ -0,0 +1,4 @@
# Ignore everything in this directory
*
# Except this file
!.gitignore

examples/README.md

+34
@@ -0,0 +1,34 @@
### Examples
---
* ex1: This is a binary classification problem, in libsvm's sparse feature format.
  Use the *shell script* [run.sh](ex1/run.sh) to perform training/test.
  The dataset is downloaded from <https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html#madelon>.

* ex2: This is a regression problem, in dense feature format. Use the *shell script* [run.sh](ex2/run.sh) to perform training/test.
  The dataset is from <https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html#housing>.

Note that for these small examples, running with multiple threads may be slower than running with a single thread because of the threading overhead. For large datasets, however, one can observe an almost linear speed-up.

The program can directly handle high-dimensional sparse features in the libsvm format, as in ex1. This is the recommended format when the dataset is relatively large (although some other formats are also supported).
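For reference, each line of a libsvm-format file is a label followed by index:value pairs for the nonzero features, for example (made-up values, not taken from the madelon data):

    +1 3:0.74 21:1.5 487:0.2
    -1 5:0.15 102:2.0 731:1.0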
---
### Tips for Parameter Tuning

There are multiple training parameters that can affect performance. The following are the more important ones (a hypothetical configuration combining them is sketched after the lists):

* **dtree.loss**: the default is LS, but for binary classification, LOGISTIC often works better.
* **forest.ntrees**: the typical range is [100,10000], and a typical value is 1000.
* **dtree.lamL2**: use a relatively large value such as 1000 or 10000. The larger dtree.lamL2 is, the larger forest.ntrees needs to be; the resulting accuracy is often better, at the cost of longer training time.
* **dtree.lamL1**: try values in [0,1000]; a large value induces sparsity.
* **dtree.max_level**, **dtree.max_nodes**, and **dtree.new_tree_gain_ratio**: these parameters control the tree depth and size (and when to start a new tree). One can try different values (such as dtree.max_level=4, dtree.max_nodes=10, or dtree.new_tree_gain_ratio=0.5) to fine-tune performance.

You may also modify the discretization options below:

* **discretize.dense.max_buckets**: try values in the range [10,65000].
* **discretize.sparse.max_buckets**: try values in the range [10,250]. If you want to try a larger value, up to 65000, you need to edit [../include/header.h](../include/header.h) and replace
"*using disc_sparse_value_t=unsigned char;*"
with "*using disc_sparse_value_t=unsigned short;*". However, this increases memory usage.
* **discretize.sparse.max_features**: you may try a different value in [1000,10000000].
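To make the above concrete, here is a hypothetical configuration for a binary classification task. The parameter names are taken from the lists above, the values are only illustrative, and the key=value file layout and the way it is passed to forest_train are assumptions; check forest_train -h and the run.sh scripts for the actual syntax.

    # hypothetical training configuration (illustrative values)
    dtree.loss=LOGISTIC
    forest.ntrees=1000
    dtree.lamL2=1000
    dtree.lamL1=10
    dtree.max_level=4
    dtree.new_tree_gain_ratio=0.5
    discretize.sparse.max_buckets=250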

0 commit comments
