nc.Request speed test #517
After testing a little, I noticed very strange behavior.
801ns
366ns
6.823769ms
977ns
Sending 30000000 bytes = 801ns, the same result. At some specific sizes NATS starts to work slowly; try a send buffer of 300000 bytes. |
You should run these in a loop and take the average at least; running a single one will show variability. For instance, with just a single one, the first Request() spins up the global response subscription. For the 6ms anomaly my bet is that it is GC. Run the test with GC off and see if you get it again. |
2019/09/12 02:52:54 -856ns
2019/09/12 02:53:31 -8.127566ms
That's the loop test, ok. I turned off GC: |
GOGC=off
2019/09/12 02:56:36 -481ns
2019/09/12 02:57:47 -6.761753ms magic |
Can you share the whole program in a gist? Also what version of Go? What server version? |
You have a problem since you are doing time.Since(time.Now()), and I don't see where you use the start time. |
go version go1.12.9 darwin/amd64
|
time.Since(startTime) same speed result
|
Sorry, so you do start time minus current time; it is a bit strange, but ok. |
The bug only happens at size 300000, not 30000 and not 3000000. It is very strange for me to see.
nats bug demo on demo.nats.io
|
Are you using master for nats.go or a release version? |
I am on master. I re-fetched master and tried again; same bug.
2019/09/12 03:21:51 989.007998ms |
Publish() is async. There is no guarantee of anything happening after you call Publish. Also, you should check the error returned by Publish. There is a maximum payload for the demo server (and all NATS servers default to this) of 1MB, so anything over that is failing. If you want to see how long it takes to have the publish processed by the server, try this:
nc, err := nats.Connect("demo.nats.io")
...
rttStart := time.Now()
nc.Flush()
rtt := time.Since(rttStart)
start := time.Now()
err = nc.Publish("test", buf) // Publish takes a subject and the payload
nc.Flush()
fmt.Printf("time to publish is %v\n", time.Since(start)-rtt)
I did not check the code but it should be close. |
go get github.com/nats-io/nats.go/ |
I downloaded the zip of release 1.8; the bug is even worse, bigger times.
I deleted the folder and re-downloaded master; same problem.
If I change the size to 30000 or 3000000 there is no bug, only size = 300000. |
I tried building it (go build t2.go); same bug. |
@deepch there is also a latency framework here: https://github.com/nats-io/latency-tests |
@deepch Thank you. I will investigate this tomorrow and get back to you. I do see it too and there is probably an explanation. Will keep you posted. |
Thank you very much for the prompt and quality support. |
Just to reiterate on what @derekcollison mentioned, the case with |
Isn't nc.Publish a blocking function? Why do I get results that differ so much in time? That is my only question.
200000 on demo.nats.io
200000 on the local net
I think that time should not be so big, especially on the local network. |
Publish() is async, ok. Let's say I have a loop in which delay is important; if I call a function that causes large delays, it will slow down. Of course I can write a pool and process these tasks myself, but doesn't the client do that internally? I understand that the delivery guarantee is important, but is there no way to send faster? The guarantee is not important to me. |
NATS is well suited for high speed messaging. Using our demo server to do performance tests is probably not the best idea. Publish() can flush the buffers if you are overflowing them, but usually it is totally async and lets another goroutine flush to the underlying socket. Also, have you tried the latency framework that @wallyqs pointed you to? And finally, what is it that you are trying to achieve? |
I am testing now; I want to send a large number of big messages without blocking. defaultBufSize = 32768
I tried changing it to defaultBufSize = 300000 * 2, and
the result is fully stable. I think for my message size the buffer is too small. Thank you so much for your help. |
Interesting, thanks for that info. I believe we depend on a bufio that, when you send something bigger than the buffer, just passes it through and I guess does a send in place. Make sure you are measuring not just the Publish() call (which with your larger buffer will now always be async) but all the messages being processed by the server. There is also a good benchmark program here that you can use. |
Thank you, I will investigate this in detail. "Publish() will always be async" is exactly what I wanted to achieve. |
What I am saying though is that if Publish() is always async but your application takes another N ms to actually move the messages out of your app to the server, I am not sure that makes sense TBH. Eventually you have to get the messages out. |
I understand that we are talking about transmission over the network, and I also understand the differences between async and non-blocking. Non-blocking sending is what I need. My problem is if I use
better for me
|
I understand, and for short bursts that is ok. But think about it. Let's say you can send to the network at 100 (arbitrary), but you are trying to send messages that are 1000 big. Eventually you have to wait, or your application grows in memory usage until it's killed by OOM. Again, you can do that in small bursts, but if it's sustained, something has to give when there is a mismatch. |
But what prevents me from checking the queue for size before sending?
|
I am saying the Publish() as it exists today in the Go client is essentially doing that now. There is a Pending() API as well IIRC. |
I think Pending only works for Subscribe. |
Tell me how replying works. Can I read everything I need to reply to on a channel, for the client's requests? First step: I send a message to a topic with nc.Publish("test", buf). Can I somehow get to msg.Respond if I do not make a Request? |
Apologies I am not following. Are you asking what the code to do that would look like? |
//client
//server
I want the channels not only to the subscription. I want to send data through a channel. |
Back to original report:
If you think about it, a Request() call does this:
The other aspect is that if the Publish() call happens not to do the actual TCP send, it can be very fast; but if the buffer is full and needs flushing, or if the flusher goroutine is doing the TCP send (under the connection lock that the Publish() call needs), then the time needed will be higher. |
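The Request() flow described above (create a unique inbox, subscribe to it, publish the request with the inbox as the reply subject, then wait for exactly one response) can be illustrated with a stdlib-only sketch. The in-process "bus" below is a toy stand-in for a NATS server, purely to show the sequence of steps; it is not how nats.go is implemented.

```go
package main

import (
	"fmt"
	"sync"
)

type msg struct {
	reply string
	data  []byte
}

// bus is a toy in-process stand-in for a NATS server.
type bus struct {
	mu   sync.Mutex
	subs map[string]chan msg
}

func newBus() *bus { return &bus{subs: make(map[string]chan msg)} }

func (b *bus) subscribe(subject string) chan msg {
	b.mu.Lock()
	defer b.mu.Unlock()
	ch := make(chan msg, 1)
	b.subs[subject] = ch
	return ch
}

func (b *bus) publish(subject, reply string, data []byte) {
	b.mu.Lock()
	ch, ok := b.subs[subject]
	b.mu.Unlock()
	if ok {
		ch <- msg{reply: reply, data: data}
	}
}

// request mirrors the steps a NATS Request() performs.
func (b *bus) request(subject string, data []byte) []byte {
	inbox := "_INBOX.1"                // 1. NATS generates a unique inbox
	replies := b.subscribe(inbox)      // 2. subscribe to the inbox
	b.publish(subject, inbox, data)    // 3. publish with the inbox as reply
	m := <-replies                     // 4. block for exactly one response
	return m.data
}

func main() {
	b := newBus()
	// "server" side: an echo responder on subject "test".
	reqs := b.subscribe("test")
	go func() {
		for m := range reqs {
			b.publish(m.reply, "", m.data)
		}
	}()
	fmt.Printf("%s\n", b.request("test", []byte("hello"))) // prints "hello"
}
```

Each of steps 1-4 adds a little work on top of a bare Publish(), which is one reason Request() timings and Publish() timings are not directly comparable.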
Trying to send big data:
make([]byte, 900000) with defaultBufSize = 32768: 2019/09/13 04:36:39 -20.820528ms
make([]byte, 900000) with defaultBufSize = 900000 * 2: 2019/09/13 04:38:10 -118.705µs
Publish != Request here; I think such a high delay is not justified. |
When you make the buffer bigger you are just using internal memory of the app to adjust for the send speed of your application and the speed of the network. |
Updated to the latest version.
ping host time=1.111 ms speed 1000 mbps
test
node 1
node 2
times := time.Now()
res, err := nc.Request("test", []byte("hello"), time.Second*1)
log.Println(string(res.Data), err, time.Since(times))
result: 4.064834ms. I think this is not very fast.
test big data
times := time.Now()
buf := make([]byte, 100000)
res, err := nc.Request("test", buf, time.Second*1)
log.Println(string(res.Data), err, time.Since(times))
100 KiB
18.530951ms
I think it's not fast?