
Commit 5b79ea8

bugfix
1 parent db63fc6 commit 5b79ea8

18 files changed: +61 −46 lines

README.md

+14-6
@@ -13,8 +13,9 @@
 [![Build Status](https://travis-ci.org/shenweichen/DeepCTR.svg?branch=master)](https://travis-ci.org/shenweichen/DeepCTR)
 [![Coverage Status](https://coveralls.io/repos/github/shenweichen/DeepCTR/badge.svg?branch=master)](https://coveralls.io/github/shenweichen/DeepCTR?branch=master)
 [![Codacy Badge](https://api.codacy.com/project/badge/Grade/d4099734dc0e4bab91d332ead8c0bdd0)](https://www.codacy.com/app/wcshen1994/DeepCTR?utm_source=github.com&utm_medium=referral&utm_content=shenweichen/DeepCTR&utm_campaign=Badge_Grade)
-[![Gitter](https://badges.gitter.im/DeepCTR/community.svg)](https://gitter.im/DeepCTR/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
+[![Disscussion](https://img.shields.io/badge/chat-wechat-brightgreen?style=flat)](#Disscussion-Group)
 [![License](https://img.shields.io/github/license/shenweichen/deepctr.svg)](https://github.com/shenweichen/deepctr/blob/master/LICENSE)
+<!-- [![Gitter](https://badges.gitter.im/DeepCTR/community.svg)](https://gitter.im/DeepCTR/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) -->
 
 
 DeepCTR is a **Easy-to-use**,**Modular** and **Extendible** package of deep-learning based CTR models along with lots of core components layers which can be used to build your own custom model easily.It is compatible with **tensorflow 1.4+ and 2.0+**.You can use any complex model with `model.fit()`and `model.predict()` .
@@ -26,7 +27,7 @@ Let's [**Get Started!**](https://deepctr-doc.readthedocs.io/en/latest/Quick-Star
 
 | Model | Paper |
 | :------------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Convolutional Click Prediction Model | [CIKM 2015][A Convolutional Click Prediction Model](http://ir.ia.ac.cn/bitstream/173211/12337/1/A%20Convolutional%20Click%20Prediction%20Model.pdf) |
+| Convolutional Click Prediction Model | [CIKM 2015][A Convolutional Click Prediction Model](http://ir.ia.ac.cn/bitstream/173211/12337/1/A%20Convolutional%20Click%20Prediction%20Model.pdf) |
 | Factorization-supported Neural Network | [ECIR 2016][Deep Learning over Multi-field Categorical Data: A Case Study on User Response Prediction](https://arxiv.org/pdf/1601.02376.pdf) |
 | Product-based Neural Network | [ICDM 2016][Product-based neural networks for user response prediction](https://arxiv.org/pdf/1611.00144.pdf) |
 | Wide & Deep | [DLRS 2016][Wide & Deep Learning for Recommender Systems](https://arxiv.org/pdf/1606.07792.pdf) |
@@ -39,7 +40,14 @@ Let's [**Get Started!**](https://deepctr-doc.readthedocs.io/en/latest/Quick-Star
 | AutoInt | [arxiv 2018][AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks](https://arxiv.org/abs/1810.11921) |
 | Deep Interest Network | [KDD 2018][Deep Interest Network for Click-Through Rate Prediction](https://arxiv.org/pdf/1706.06978.pdf) |
 | Deep Interest Evolution Network | [AAAI 2019][Deep Interest Evolution Network for Click-Through Rate Prediction](https://arxiv.org/pdf/1809.03672.pdf) |
-| NFFM | [arxiv 2019][Operation-aware Neural Networks for User Response Prediction](https://arxiv.org/pdf/1904.12579.pdf) |
-| FGCNN | [WWW 2019][Feature Generation by Convolutional Neural Network for Click-Through Rate Prediction ](https://arxiv.org/pdf/1904.04447) |
-| Deep Session Interest Network | [IJCAI 2019][Deep Session Interest Network for Click-Through Rate Prediction ](https://arxiv.org/abs/1905.06482) |
-| FiBiNET | [RecSys 2019][FiBiNET: Combining Feature Importance and Bilinear feature Interaction for Click-Through Rate Prediction](https://arxiv.org/pdf/1905.09433.pdf) |
+| NFFM | [arxiv 2019][Operation-aware Neural Networks for User Response Prediction](https://arxiv.org/pdf/1904.12579.pdf) |
+| FGCNN | [WWW 2019][Feature Generation by Convolutional Neural Network for Click-Through Rate Prediction ](https://arxiv.org/pdf/1904.04447) |
+| Deep Session Interest Network | [IJCAI 2019][Deep Session Interest Network for Click-Through Rate Prediction ](https://arxiv.org/abs/1905.06482) |
+| FiBiNET | [RecSys 2019][FiBiNET: Combining Feature Importance and Bilinear feature Interaction for Click-Through Rate Prediction](https://arxiv.org/pdf/1905.09433.pdf) |
+
+## Disscussion Group
+Please follow our wechat to join group:
+
+![wechat](./docs/pics/weichennote.png)
+
+
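As a quick illustration of the README's claim that any model trains with `model.fit()` and scores with `model.predict()`, here is a minimal sketch. The `SparseFeat(name, vocabulary_size)` arguments and the dict-of-arrays input format are assumptions about this release of the library, not code from this commit.

```python
# Minimal usage sketch (not part of this commit): train/score a DeepCTR model
# with plain Keras fit()/predict(). SparseFeat's constructor arguments are an
# assumption about this release's API and may differ across versions.
import numpy as np
from deepctr.models import DeepFM
from deepctr.inputs import SparseFeat

feature_columns = [SparseFeat('user_id', 100), SparseFeat('item_id', 200)]

# DeepCTR names its Keras Input layers after the features, so a dict keyed by
# feature name can be fed directly to fit()/predict().
n = 256
x = {'user_id': np.random.randint(0, 100, (n, 1)),
     'item_id': np.random.randint(0, 200, (n, 1))}
y = np.random.randint(0, 2, (n, 1))

model = DeepFM(feature_columns, feature_columns)  # linear and DNN parts share the same columns here
model.compile('adam', 'binary_crossentropy')
model.fit(x, y, batch_size=64, epochs=1, verbose=0)
preds = model.predict(x, batch_size=64)  # predicted CTR in [0, 1]
```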

deepctr/inputs.py

+5-4
@@ -141,7 +141,8 @@ def create_embedding_matrix(feature_columns,l2_reg,init_std,seed,embedding_size,
                                          l2_reg, prefix=prefix + 'sparse',seq_mask_zero=seq_mask_zero)
     return sparse_emb_dict
 
-def get_linear_logit(features, feature_columns, units=1, l2_reg=0, init_std=0.0001, seed=1024, prefix='linear'):
+def get_linear_logit(features, feature_columns, units=1, use_bias=False, init_std=0.0001, seed=1024, prefix='linear',
+                     l2_reg=0):
 
     linear_emb_list = [input_from_feature_columns(features,feature_columns,1,l2_reg,init_std,seed,prefix=prefix+str(i))[0] for i in range(units)]
     _, dense_input_list = input_from_feature_columns(features,feature_columns,1,l2_reg,init_std,seed,prefix=prefix)
@@ -152,13 +153,13 @@ def get_linear_logit(features, feature_columns, units=1, l2_reg=0, init_std=0.00
         if len(linear_emb_list[0])>0 and len(dense_input_list) >0:
             sparse_input = concat_fun(linear_emb_list[i])
             dense_input = concat_fun(dense_input_list)
-            linear_logit = Linear(l2_reg,mode=2)([sparse_input,dense_input])
+            linear_logit = Linear(l2_reg,mode=2,use_bias=use_bias)([sparse_input,dense_input])
         elif len(linear_emb_list[0])>0:
             sparse_input = concat_fun(linear_emb_list[i])
-            linear_logit = Linear(l2_reg,mode=0)(sparse_input)
+            linear_logit = Linear(l2_reg,mode=0,use_bias=use_bias)(sparse_input)
         elif len(dense_input_list) >0:
             dense_input = concat_fun(dense_input_list)
-            linear_logit = Linear(l2_reg,mode=1)(dense_input)
+            linear_logit = Linear(l2_reg,mode=1,use_bias=use_bias)(dense_input)
         else:
             raise NotImplementedError
         linear_logit_list.append(linear_logit)
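For reference, a minimal sketch of how a caller would use the reordered signature introduced here: `l2_reg` now sits at the end and is passed by keyword, and the bias is opt-in via `use_bias`. The feature columns and the `build_input_features` helper below are assumptions about this version of `deepctr.inputs`, not part of the commit.

```python
# Hypothetical caller of the new get_linear_logit signature (not from the repo).
# SparseFeat and build_input_features are assumed to live in deepctr.inputs in
# this version; their exact signatures may differ across releases.
from deepctr.inputs import SparseFeat, build_input_features, get_linear_logit

linear_feature_columns = [SparseFeat('user_id', 100), SparseFeat('item_id', 200)]
features = build_input_features(linear_feature_columns)  # OrderedDict: name -> Keras Input

# l2_reg moved to the last position, so it is now passed by keyword;
# use_bias defaults to False, matching the model code updated in this commit.
linear_logit = get_linear_logit(features, linear_feature_columns,
                                init_std=0.0001, seed=1024, prefix='linear',
                                l2_reg=1e-5, use_bias=False)
```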

deepctr/layers/utils.py

+14-12
@@ -65,21 +65,24 @@ def get_config(self, ):
 
 class Linear(tf.keras.layers.Layer):
 
-    def __init__(self, l2_reg=0.0, mode=0, **kwargs):
+    def __init__(self, l2_reg=0.0, mode=0, use_bias=False, **kwargs):
 
         self.l2_reg = l2_reg
         # self.l2_reg = tf.contrib.layers.l2_regularizer(float(l2_reg_linear))
+        if mode not in [0,1,2]:
+            raise ValueError("mode must be 0,1 or 2")
         self.mode = mode
+        self.use_bias = use_bias
         super(Linear, self).__init__(**kwargs)
 
     def build(self, input_shape):
-
-        self.bias = self.add_weight(name='linear_bias',
-                                    shape=(1,),
-                                    initializer=tf.keras.initializers.Zeros(),
-                                    trainable=True)
-
-        self.dense = tf.keras.layers.Dense(units=1, activation=None, use_bias=False,
+        if self.use_bias:
+            self.bias = self.add_weight(name='linear_bias',
+                                        shape=(1,),
+                                        initializer=tf.keras.initializers.Zeros(),
+                                        trainable=True)
+        if self.mode != 0 :
+            self.dense = tf.keras.layers.Dense(units=1, activation=None, use_bias=False,
                                            kernel_regularizer=tf.keras.regularizers.l2(self.l2_reg))
 
         super(Linear, self).build(input_shape)  # Be sure to call this somewhere!
@@ -92,15 +95,14 @@ def call(self, inputs , **kwargs):
         elif self.mode == 1:
             dense_input = inputs
             linear_logit = self.dense(dense_input)
-
         else:
             sparse_input, dense_input = inputs
 
             linear_logit = reduce_sum(sparse_input, axis=-1, keep_dims=False) + self.dense(dense_input)
+        if self.use_bias:
+            linear_logit += self.bias
 
-        linear_bias_logit = linear_logit + self.bias
-
-        return linear_bias_logit
+        return linear_logit
 
     def compute_output_shape(self, input_shape):
         return (None, 1)
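As read from the diff above, `mode=0` takes the concatenated size-1 sparse embeddings, `mode=1` a dense feature tensor, and `mode=2` the pair `[sparse_input, dense_input]`; an out-of-range mode now raises `ValueError` at construction, and the scalar bias is only built when `use_bias=True`. A small illustrative sketch follows; the input shapes are assumptions about the caller's layout.

```python
# Illustrative use of the updated Linear layer (shapes are assumptions).
import tensorflow as tf
from deepctr.layers.utils import Linear

# Five size-1 linear embeddings concatenated on the last axis -> (batch, 1, 5);
# reduce_sum over the last axis inside the layer yields a (batch, 1) logit.
sparse_input = tf.keras.layers.Input(shape=(1, 5))
dense_input = tf.keras.layers.Input(shape=(3,))  # three raw dense features

logit = Linear(l2_reg=1e-5, mode=2, use_bias=True)([sparse_input, dense_input])
model = tf.keras.Model([sparse_input, dense_input], logit)  # output shape (None, 1)

# Linear(mode=3) would now raise ValueError("mode must be 0,1 or 2") instead of
# silently building an unusable layer.
```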

deepctr/models/afm.py

+2-2
@@ -45,8 +45,8 @@ def AFM(linear_feature_columns, dnn_feature_columns, embedding_size=8, use_atten
                                                                       l2_reg_embedding, init_std,
                                                                       seed,support_dense=False)
 
-    linear_logit = get_linear_logit(features, linear_feature_columns, l2_reg=l2_reg_linear, init_std=init_std,
-                                    seed=seed, prefix='linear')
+    linear_logit = get_linear_logit(features, linear_feature_columns, init_std=init_std, seed=seed, prefix='linear',
+                                    l2_reg=l2_reg_linear)
 
     fm_input = concat_fun(sparse_embedding_list, axis=1)
     if use_attention:

deepctr/models/ccpm.py

+2-2
@@ -48,8 +48,8 @@ def CCPM(linear_feature_columns, dnn_feature_columns, embedding_size=8, conv_ker
     sparse_embedding_list, _ = input_from_feature_columns(features,dnn_feature_columns,embedding_size,
                                                           l2_reg_embedding, init_std,
                                                           seed,support_dense=False)
-    linear_logit = get_linear_logit(features, linear_feature_columns, l2_reg=l2_reg_linear, init_std=init_std,
-                                    seed=seed)
+    linear_logit = get_linear_logit(features, linear_feature_columns, init_std=init_std, seed=seed,
+                                    l2_reg=l2_reg_linear)
 
     n = len(sparse_embedding_list)
     l = len(conv_filters)

deepctr/models/deepfm.py

+2-2
@@ -47,8 +47,8 @@ def DeepFM(linear_feature_columns, dnn_feature_columns, embedding_size=8, use_fm
                                                                       l2_reg_embedding,init_std,
                                                                       seed)
 
-    linear_logit = get_linear_logit(features, linear_feature_columns, l2_reg=l2_reg_linear, init_std=init_std,
-                                    seed=seed, prefix='linear')
+    linear_logit = get_linear_logit(features, linear_feature_columns, init_std=init_std, seed=seed, prefix='linear',
+                                    l2_reg=l2_reg_linear)
 
     fm_input = concat_fun(sparse_embedding_list, axis=1)
     fm_logit = FM()(fm_input)

deepctr/models/fibinet.py

+2-2
@@ -55,8 +55,8 @@ def FiBiNET(linear_feature_columns, dnn_feature_columns, embedding_size=8, bilin
     bilinear_out = BilinearInteraction(
         bilinear_type=bilinear_type, seed=seed)(sparse_embedding_list)
 
-    linear_logit = get_linear_logit(features, linear_feature_columns, l2_reg=l2_reg_linear, init_std=init_std,
-                                    seed=seed, prefix='linear')
+    linear_logit = get_linear_logit(features, linear_feature_columns, init_std=init_std, seed=seed, prefix='linear',
+                                    l2_reg=l2_reg_linear)
 
     dnn_input = combined_dnn_input(
         [Flatten()(concat_fun([senet_bilinear_out, bilinear_out]))], dense_value_list)

deepctr/models/fnn.py

+2-2
@@ -41,8 +41,8 @@ def FNN(linear_feature_columns, dnn_feature_columns, embedding_size=8, dnn_hidde
                                                                       l2_reg_embedding,init_std,
                                                                       seed)
 
-    linear_logit = get_linear_logit(features, linear_feature_columns, l2_reg=l2_reg_linear, init_std=init_std,
-                                    seed=seed, prefix='linear')
+    linear_logit = get_linear_logit(features, linear_feature_columns, init_std=init_std, seed=seed, prefix='linear',
+                                    l2_reg=l2_reg_linear)
 
     dnn_input = combined_dnn_input(sparse_embedding_list,dense_value_list)
     deep_out = DNN(dnn_hidden_units, dnn_activation, l2_reg_dnn,

deepctr/models/mlr.py

+4-4
@@ -60,14 +60,14 @@ def MLR(region_feature_columns, base_feature_columns=None, region_num=4,
 
 def get_region_score(features,feature_columns, region_number, l2_reg, init_std, seed,prefix='region_',seq_mask_zero=True):
 
-    region_logit =concat_fun([get_linear_logit(features, feature_columns, l2_reg=l2_reg, init_std=init_std,
-                                               seed=seed + i, prefix=prefix + str(i + 1)) for i in range(region_number)])
+    region_logit =concat_fun([get_linear_logit(features, feature_columns, init_std=init_std, seed=seed + i,
+                                               prefix=prefix + str(i + 1), l2_reg=l2_reg) for i in range(region_number)])
     return Activation('softmax')(region_logit)
 
 def get_learner_score(features,feature_columns, region_number, l2_reg, init_std, seed,prefix='learner_',seq_mask_zero=True,task='binary'):
     region_score = [PredictionLayer(task=task,use_bias=False)(
-        get_linear_logit(features, feature_columns, l2_reg=l2_reg, init_std=init_std, seed=seed + i,
-                         prefix=prefix + str(i + 1))) for i in
+        get_linear_logit(features, feature_columns, init_std=init_std, seed=seed + i, prefix=prefix + str(i + 1),
+                         l2_reg=l2_reg)) for i in
         range(region_number)]
 
     return concat_fun(region_score)

deepctr/models/nffm.py

+2-2
@@ -53,8 +53,8 @@ def NFFM(linear_feature_columns, dnn_feature_columns, embedding_size=4, dnn_hidd
 
     inputs_list = list(features.values())
 
-    linear_logit = get_linear_logit(features, linear_feature_columns, l2_reg=l2_reg_linear, init_std=init_std,
-                                    seed=seed, prefix='linear')
+    linear_logit = get_linear_logit(features, linear_feature_columns, init_std=init_std, seed=seed, prefix='linear',
+                                    l2_reg=l2_reg_linear)
 
     sparse_feature_columns = list(
         filter(lambda x: isinstance(x, SparseFeat), dnn_feature_columns)) if dnn_feature_columns else []

deepctr/models/nfm.py

+2-2
@@ -44,8 +44,8 @@ def NFM(linear_feature_columns, dnn_feature_columns, embedding_size=8, dnn_hidde
                                                                       l2_reg_embedding,init_std,
                                                                       seed)
 
-    linear_logit = get_linear_logit(features, linear_feature_columns, l2_reg=l2_reg_linear, init_std=init_std,
-                                    seed=seed, prefix='linear')
+    linear_logit = get_linear_logit(features, linear_feature_columns, init_std=init_std, seed=seed, prefix='linear',
+                                    l2_reg=l2_reg_linear)
 
     fm_input = concat_fun(sparse_embedding_list, axis=1)
     bi_out = BiInteractionPooling()(fm_input)

deepctr/models/wdl.py

+2-2
@@ -43,8 +43,8 @@ def WDL(linear_feature_columns, dnn_feature_columns, embedding_size=8, dnn_hidde
                                                                       l2_reg_embedding, init_std,
                                                                       seed)
 
-    linear_logit = get_linear_logit(features, linear_feature_columns, l2_reg=l2_reg_linear, init_std=init_std,
-                                    seed=seed, prefix='linear')
+    linear_logit = get_linear_logit(features, linear_feature_columns, init_std=init_std, seed=seed, prefix='linear',
+                                    l2_reg=l2_reg_linear)
 
 
     dnn_input = combined_dnn_input(sparse_embedding_list, dense_value_list)

deepctr/models/xdeepfm.py

+2-2
@@ -50,8 +50,8 @@ def xDeepFM(linear_feature_columns, dnn_feature_columns, embedding_size=8, dnn_h
                                                                       l2_reg_embedding,init_std,
                                                                       seed)
 
-    linear_logit = get_linear_logit(features, linear_feature_columns, l2_reg=l2_reg_linear, init_std=init_std,
-                                    seed=seed, prefix='linear')
+    linear_logit = get_linear_logit(features, linear_feature_columns, init_std=init_std, seed=seed, prefix='linear',
+                                    l2_reg=l2_reg_linear)
 
     fm_input = concat_fun(sparse_embedding_list, axis=1)

docs/pics/deepctrbot.jpeg
59.6 KB

docs/pics/deepctrbot.png
223 KB

docs/pics/weichennote.jpg
80.9 KB

docs/pics/weichennote.png
35.7 KB

docs/source/index.rst

+6-2
@@ -23,8 +23,8 @@ Welcome to DeepCTR's documentation!
 .. |Issues| image:: https://img.shields.io/github/issues/shenweichen/deepctr.svg
 .. _Issues: https://github.com/shenweichen/deepctr/issues
 
-.. |Gitter| image:: https://badges.gitter.im/DeepCTR/community.svg
-.. _Gitter: https://gitter.im/DeepCTR/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge
+.. |Chat| image:: https://img.shields.io/badge/chat-wechat-brightgreen?style=flat
+.. _Gitter: ./#Disscussion-Group
 
 DeepCTR is a **Easy-to-use** , **Modular** and **Extendible** package of deep-learning based CTR models along with lots of core components layer which can be used to build your own custom model easily.It is compatible with **tensorflow 1.4+ and 2.0+**.You can use any complex model with ``model.fit()`` and ``model.predict()``.
 
@@ -40,6 +40,10 @@ News
 
 08/02/2019 : Now DeepCTR is compatible with tensorflow `1.14` and `2.0.0`. `Changelog <https://github.com/shenweichen/DeepCTR/releases/tag/v0.6.0>`_
 
+Disscussion Group
+-----
+image:: https://raw.githubusercontent.com/shenweichen/deepctr/master/docs/pics/weichennote.png?sanitize=true
+
 .. toctree::
     :maxdepth: 2
     :caption: Home:
