
chore: update to go-log@v2 #8765

Draft · wants to merge 1 commit into master

Conversation


@schomatis (Contributor) commented Mar 4, 2022

Need review and help with the flagged items in core/corehttp/logs.go.

Closes #8753.
Closes #9245.

@schomatis schomatis requested review from lidel and aschmahmann March 4, 2022 20:58
@schomatis schomatis self-assigned this Mar 4, 2022
Comment on lines +55 to +75
// FIXME(BLOCKING): This is a brittle solution and needs careful review.
// Ideally we should use an io.Pipe or similar, but in contrast
// with go-log@v1 where the driver was an io.Writer, here the log
// comes from an io.Reader, and we need to constantly read from it
// and then write to the HTTP response.
pipeReader := logging.NewPipeReader()
b := new(bytes.Buffer)
go func() {
	for {
		// FIXME: We are not handling read errors.
		// FIXME: We may block on read and not catch the context
		// cancellation.
		b.ReadFrom(pipeReader)
		b.WriteTo(wnf)
		select {
		case <-r.Context().Done():
			return
		default:
		}
	}
}()
@schomatis (Contributor, author):
Blocking: need help.

Comment on lines +77 to +79
// FIXME(BLOCKING): Verify this call replacing the `Event` API
// which has been removed in go-log v2.
log.Debugf("log API client connected")
@schomatis (Contributor, author):
Blocking: need help.

@BigLep BigLep modified the milestones: Best Effort Track, go-ipfs 0.13 Mar 8, 2022
@BigLep BigLep requested review from guseggert and removed request for aschmahmann and lidel April 1, 2022 16:27
BigLep commented Apr 7, 2022

2022-04-07: we need to see if this has been superseded by go-libp2p work.

@Jorropo Jorropo self-requested a review April 12, 2022 17:24
// FIXME: We are not handling read errors.
// FIXME: We may block on read and not catch the context
// cancellation.
b.ReadFrom(pipeReader)
Contributor:
This is potential unbounded memory growth.

If we implement it like that, we should use a read/write loop with a fixed-size buffer.

b := new(bytes.Buffer)
go func() {
for {
// FIXME: We are not handling read errors.
Contributor:
The error handling seems easy enough.

An error while reading should be sent to the client and terminate the connection.
And an error while writing probably means the client disconnected, so just terminate the connection.

lwriter.WriterGroup.AddWriter(wnf)
log.Event(n.Context(), "log API client connected") //nolint deprecated

// FIXME(BLOCKING): This is a brittle solution and needs careful review.
@Jorropo (Contributor) commented Apr 13, 2022:
That code is running in circles doing useless things:

https://github.com/ipfs/go-log/blob/8625e3ec81bdeb96627de192e6fe21eab5896603/pipe.go#L50-L58

r, w := io.Pipe()

p := &PipeReader{
	r:      r,
	closer: w,
	core:   newCore(opt.format, zapcore.AddSync(w), opt.level),
}

Zap wants to write to an io.Writer and we have an io.Writer but we are doing zap -(Write)> io.Pipe -(Read)> Read Write Repeat Loop -(Write)> HTTP Writer.

You should add support in go-log@v2 for adding a writer into the core pool.
The only issue I see with that is that its synchronous nature could make scaling really slow (as it would write logs synchronously, one by one).
And someone could do a slowloris attack and stall the IPFS node.

The first approach is to say: it's the API, and if you are so slow that you manage to slow down your IPFS node, it's your fault. With API access you could nuke the config anyway, and we can't be expected to protect people from themselves.

If that is really an issue, we could implement a simple asynchronous buffered IO wrapper that cuts off if there is too much buffered data.
That would be easy to do with channels: have Write append to a channel, and have a goroutine read from that channel and forward the data to the underlying Write (you could also implement a custom ring buffer to coalesce multiple reads). And since you are using channels in the copy loop, you can select on both the data input and the context done.

zap -(Write)> asyncBuffer -(sendDataOverChannelWithSelectOverAContextToo)> copyLoop -(Write)> HTTP

If I was unclear just ask and I'll write that part, I already know what to do.

@schomatis (Contributor, author):
This seems like an issue that should be raised and addressed in go-log, not here. I agree that things should be better on that side.

select {
case <-r.Context().Done():
	return
default:
Contributor:
That code is a potential CPU spinner if a non-blocking reader is ever returned (which is currently not the case: this is an io.Pipe, which blocks when there is no data).

Just one more reason to do the correct thing (either just enroll it in the log framework, or make a custom asynchronous buffering thing and enroll that).


Jorropo commented Apr 13, 2022

@schomatis you should also rebase on master; a few things have changed, but AFAIK not the main blocking point.


BigLep commented Apr 21, 2022

@schomatis: are you going to make the corresponding change/improvement in go-log@v2?

@schomatis (Contributor, author):

@BigLep I'm confused about exactly what those changes should be. That's why I requested an issue to be created with the scoped changes needed to land this PR (if those are indeed a blocker here; I didn't completely understand the feedback).


Jorropo commented Apr 21, 2022

Ok, I'll open an issue on go-log


BigLep commented Apr 26, 2022

2022-04-26 conversation:

  1. For 0.13: do nothing (in the release notes for 0.13, "if you were using log-tail for certain things, you'll need to look at the daemon input. Previously we had some output in daemon logs and some log-tail.")
  2. For 0.14 we're going to make log-tail work as desired.

lidel added a commit that referenced this pull request Apr 26, 2022
Ensuring people are aware the RPC API/CMD may change
Context: #8765 (comment)
@schomatis (Contributor, author):

@BigLep Note that ipfs log tail can be fixed pretty easily, independent of the entire "switch the whole code base to log@v2". It's just the file diff https://github.com/ipfs/go-ipfs/pull/8765/files#diff-8ecc58f41eaa3971752dc974dd21fd9c1ecfdac1c15e9e9c40dc6fa4283a8d4e.

That can land in the next release if desired.

lidel added a commit that referenced this pull request Apr 28, 2022
Ensuring people are aware the RPC API/CMD may change
Context: #8765 (comment)
@lidel lidel modified the milestones: go-ipfs 0.13, go-ipfs 0.14 Jun 19, 2022
@BigLep BigLep modified the milestones: kubo 0.14, kubo 0.15 Jul 22, 2022
@BigLep BigLep removed this from the kubo 0.15 milestone Jul 22, 2022

BigLep commented Jul 26, 2022

@Jorropo : is there an issue in https://github.com/ipfs/go-log to improve things there?

@guseggert guseggert added the P1 High: Likely tackled by core team if no one steps up label Oct 6, 2022

Successfully merging this pull request may close these issues.

"ipfs log tail" doesn't log anything
Switch to go-log v2
5 participants