Removing package and move to vendor packages
Remove Kafka and Kafka Manager to use kafka community release
Use upstream nozzle from Rakuten
shinji62 committed Apr 19, 2018
1 parent 905aef4 commit 314d9e0
Showing 33 changed files with 260 additions and 577 deletions.
6 changes: 6 additions & 0 deletions .final_builds/packages/golang-1.9-linux/index.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,6 @@
builds:
200a29b129a9078cb156be44c4a792ae24f42900:
version: 200a29b129a9078cb156be44c4a792ae24f42900
blobstore_id: 77401338-e08b-49c4-6e98-d8d2456184fa
sha1: bacfa6a162fa7085f95df7d43e911404a3644b01
format-version: "2"
2 changes: 2 additions & 0 deletions .gitignore
@@ -1,3 +1,5 @@
creds.yml
int.sh
results.bin
ci
test
3 changes: 3 additions & 0 deletions .gitmodules
@@ -0,0 +1,3 @@
[submodule "src/kafka-firehose-nozzle/src/github.com/rakutentech/kafka-firehose-nozzle"]
path = src/kafka-firehose-nozzle/src/github.com/rakutentech/kafka-firehose-nozzle
url = [email protected]:rakutentech/kafka-firehose-nozzle.git
42 changes: 22 additions & 20 deletions README.md
@@ -11,7 +11,7 @@ Cloud Foundry loggregator firehose is fire and forget and can drop logs is you d
Most logging systems, like syslog, Elasticsearch and so on, are pretty slow, and losing logs can happen, especially in a high-traffic environment.


-So the idea is to use something fast enough to consumes the firehose stream without putting pression on the final consumer.
+So the idea is to use something fast enough to consume the firehose stream without putting pressure on the final consumer.


Using Kafka, we can easily consume the firehose stream, and we can run as many consumers as we want on the same Kafka deployment.
@@ -45,30 +45,26 @@ Component

#### kafka-firehose-nozzle
Simply forwards messages from the firehose to Kafka.
-For topics and partition information please go directly to the Github [repository](https://github.com/shinji62/kafka-firehose-nozzle).
+For topics and partition information please go directly to the GitHub [repository](https://github.com/rakutentech/kafka-firehose-nozzle).

For now we install the nozzle on a VM.
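The nozzle fans each firehose event type out to its own Kafka topic. A minimal sketch of that routing in Python, using the default topic names from the job spec below; the `topic_for` helper and the `overrides` parameter are illustrative, not the nozzle's actual API:

```python
# Sketch: route a firehose event type to a Kafka topic name.
# Default names mirror the job spec (kafka.topic.*); the overrides
# argument illustrates the topic relabeling the spec allows.
DEFAULT_TOPICS = {
    "LogMessage": "LogMessage",
    "ValueMetric": "ValueMetric",
    "ContainerMetric": "ContainerMetric",
    "CounterEvent": "CounterEvent",
    "HttpStartStop": "HttpStartStop",
    "Error": "Error",
}

def topic_for(event_type, overrides=None):
    """Return the Kafka topic a firehose event type is published to."""
    topics = dict(DEFAULT_TOPICS, **(overrides or {}))
    try:
        return topics[event_type]
    except KeyError:
        raise ValueError("unknown firehose event type: %s" % event_type)

print(topic_for("LogMessage"))                          # → LogMessage
print(topic_for("ValueMetric", {"ValueMetric": "vm"}))  # → vm
```

Because each event type gets its own topic, consumers can subscribe only to the streams they care about (e.g. just `LogMessage`) without filtering the whole firehose.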


-### Kafka
-Well Kafka....
-
-Included in this bosh release Kafka v0.10.2.0
-
-### Zookeeper
-Well Zookeeper...
+### Kafka / Zookeeper / Kafka-Manager
+
+Not included in this bosh release; we use [zookeeper-release](https://bosh.io/d/github.com/cppforlife/zookeeper-release)
+and [kafka-release](https://bosh.io/d/github.com/cppforlife/kafka-release).

Usage
-----

-First upload zookeeper-release if you don't have it
+First upload the zookeeper, bpm and kafka releases if you don't have them

```bash
$ bosh upload release https://bosh.io/d/github.com/cppforlife/zookeeper-release
$ bosh upload release https://bosh.io/d/github.com/cloudfoundry-incubator/bpm-release
```

Then upload this bosh release
@@ -77,22 +73,28 @@ Then upload this bosh release
$ bosh upload release https://github.com/shinji62/cflogs-boshrelease
```

Please check the deployment sample manifest [templates/bosh-lite.yml](./templates/bosh-lite.yml)


-Then just deployment
+Deploy

Please adapt to your own usage
```
-//Bosh-old-cli
-bosh deploy
+bosh deploy -d cflogs ./manifest/manifest.yml \
+  --vars-store ./creds.yml \
+  -o ./manifest/ops-files/kafka-all-in-one.yml \
+  -o ./manifest/ops-files/use-latest-version.yml \
+  -o ./manifest/ops-files/cf-skip-ssl.yml \
+  -v nozzle_client_id=firehose-to-kafka \
+  -v nozzle_client_secret=foobar \
+  -v doppler_address=doppler_address \
+  -v uaa_address=uaa_address \
+  -v subscription_id=pcfdev \
+  -v deployment_name=cflogs \
+  -v kafka_persistent_disk=10240 --no-redact
-//Bosh cli v2
-bosh2 deploy -e yourenv -d cflogs-boshlite ./templates/bosh-lite.yml -n
```
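Each `-o` flag in the deploy command layers an ops-file onto the base manifest. A rough Python sketch of how a single `replace` operation resolves its path, where `apply_replace` is a hypothetical helper (real ops-files also support `remove`, list appends with `-`, and optional `?` segments):

```python
# Minimal sketch of a BOSH ops-file "replace" operation: path segments
# are either mapping keys or "key=value" selectors into a list.
# Illustrative only -- not the real bosh-cli implementation.
def apply_replace(doc, path, value):
    segments = path.strip("/").split("/")
    node = doc
    for seg in segments[:-1]:
        if "=" in seg:  # select a list element by field value
            key, want = seg.split("=", 1)
            node = next(e for e in node if e.get(key) == want)
        else:
            node = node[seg]
    last = segments[-1]
    if "=" in last:
        raise ValueError("replace target must be a mapping key")
    node[last] = value
    return doc

manifest = {"instance_groups": [{"name": "kafka", "instances": 1}]}
apply_replace(manifest, "/instance_groups/name=kafka/instances", 3)
print(manifest["instance_groups"][0]["instances"])  # → 3
```

This is why the same base manifest can serve bosh-lite and production: ops-files like `kafka-all-in-one.yml` rewrite only the paths they name.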



TODO
====
* Test, test and test
* Concourse Pipeline
28 changes: 18 additions & 10 deletions jobs/kafka-firehose-nozzle/spec
@@ -9,14 +9,13 @@ templates:

packages:
- cflogs-utils
- golang1.8
- kafka-firehose-nozzle

consumes:
-- name: conn
-  type: kafka
+- name: kafka
+  type: conn
  properties:
-  - port
+  - listen_port


properties:
@@ -48,15 +47,24 @@ properties:
kafka.retry_backoff:
default: 500
description: "Producer Retry backoff in millisecond. Default 500ms"
-  kafka.topic.relabel.log-message:
+  kafka.compression:
+    default: none
+    description: "Message Compression (none, snappy, gzip)"
+  kafka.topic.log-message:
     default: "LogMessage"
     description: "Changing topic name (default LogMessage)"
-  kafka.topic.relabel.container-metric:
+  kafka.topic.container-metric:
     default: "ContainerMetric"
     description: "Changing topic name (default ContainerMetric)"
-  kafka.topic.relabel.value-metric:
+  kafka.topic.value-metric:
     default: "ValueMetric"
     description: "Changing topic name (default ValueMetric)"
-  kafka.topic.relabel.counter-event:
+  kafka.topic.counter-event:
     default: CounterEvent
     description: "Changing topic name (default CounterEvent)"
-  kafka.topic.relabel.http-start-stop:
+  kafka.topic.http-start-stop:
     default: "HttpStartStop"
     description: "Changing topic name (default HttpStartStop)"
-  kafka.topic.relabel.error:
+  kafka.topic.error:
     default: "Error"
     description: "Changing topic name (default Error)"
@@ -30,8 +30,8 @@ password = "<%= p('cf.client_secret') %>"
[kafka]
#The list of kafka brokers IP
<%
-kafkabrokers_ip = link("conn").instances.map do |instance|
-  "\"#{instance.address}:#{link("conn").p('port')}\""
+kafkabrokers_ip = link("kafka").instances.map do |instance|
+  "\"#{instance.address}:#{link("kafka").p('listen_port')}\""
end.join(',')
%>
brokers = [<%=kafkabrokers_ip%>]
@@ -41,11 +41,14 @@ brokers = [<%=kafkabrokers_ip%>]
retry_max = <%= p('kafka.retry_max') %>
retry_backoff_ms = <%= p('kafka.retry_backoff') %>


+# Message compression "none" or "snappy", "gzip"
+compression = "<%= p('kafka.compression') %>"

-[kafka.topic.relabel]
-<% if_p('kafka.topic.relabel.log-message') do |prop| %>log_message = "<%=prop%>"<%end%>
-<% if_p('kafka.topic.relabel.container-metric') do |prop| %>container_metric = "<%=prop%>"<%end%>
-<% if_p('kafka.topic.relabel.value-metric') do |prop| %>value_metric = "<%=prop%>"<%end%>
-<% if_p('kafka.topic.relabel.counter-event') do |prop| %>counter_event = "<%=prop%>"<%end%>
-<% if_p('kafka.topic.relabel.http-start-stop') do |prop| %>http_start_stop = "<%=prop%>"<%end%>
-<% if_p('kafka.topic.relabel.error') do |prop| %>error = "<%=prop%>"<%end%>
+[kafka.topic]
+log_message = "<%= p('kafka.topic.log-message') %>"
+value_metric = "<%= p('kafka.topic.value-metric') %>"
+container_metric = "<%= p('kafka.topic.container-metric') %>"
+http_start_stop = "<%= p('kafka.topic.http-start-stop') %>"
+counter_event = "<%= p('kafka.topic.counter-event') %>"
+error = "<%= p('kafka.topic.error') %>"
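The ERB above builds the `brokers` TOML array from the instances exposed by the `kafka` BOSH link. The same string-building logic, sketched in Python; the instance addresses are made-up values for illustration:

```python
# Sketch: build the TOML brokers array the ERB template emits, from
# link-provided instance addresses and the advertised listen port.
def broker_list(instances, listen_port):
    return ",".join('"%s:%d"' % (addr, listen_port) for addr in instances)

instances = ["10.0.0.5", "10.0.0.6"]  # illustrative link-provided IPs
print("brokers = [%s]" % broker_list(instances, 9092))
# → brokers = ["10.0.0.5:9092","10.0.0.6:9092"]
```

Deriving the broker list from the link (rather than a hardcoded property) means scaling the Kafka instance group automatically updates every nozzle's config on redeploy.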
5 changes: 0 additions & 5 deletions jobs/kafka-manager/monit

This file was deleted.

25 changes: 0 additions & 25 deletions jobs/kafka-manager/spec

This file was deleted.

49 changes: 0 additions & 49 deletions jobs/kafka-manager/templates/bin/kafka_manager_ctl.erb

This file was deleted.

44 changes: 0 additions & 44 deletions jobs/kafka-manager/templates/config/application.conf.erb

This file was deleted.

This file was deleted.

5 changes: 0 additions & 5 deletions jobs/kafka/monit

This file was deleted.

71 changes: 0 additions & 71 deletions jobs/kafka/spec

This file was deleted.
