LibURL+LibWeb+UI: Remove the implicit URL constructors #3668

Open
shannonbooth wants to merge 9 commits into master
Conversation

shannonbooth
Contributor

All URLs are now either constructed through the URL Parser or by
default constructing a URL and setting each of its fields
manually. This makes it much more difficult to create invalid
URLs, which allows us to remove the URL's valid state.

This is the same type used in the spec. We cannot use a URL record
for this member, as the spec in some scenarios sets and compares
this URL string against values that are not valid URLs, such as the
empty string.

With the implicit string constructors for the URL class removed,
explicitly using URL::Parser::basic_parse makes the code look
quite silly in those places.
This avoids potentially passing through an invalid URL.
This avoids relying on the implicit URL constructor. Parsing should
never fail here, as urlString is already parsed above before the
suffix is added, and the suffix should only ever be a valid query string.
The spec is a bit weird here in that it passes through a string
instead of a URL record. However, the string used in this case
should only ever be a valid URL string, unless it is the empty
string.
Removing one more caller of the implicit URL constructors.
All URLs are now either constructed through the URL Parser or by
default constructing a URL and setting each of its fields
manually. This makes it much more difficult to create invalid
URLs.
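To make the new rule concrete, a minimal sketch of the two remaining ways to construct a URL (illustrative only; the setter names are assumptions about the LibURL API, not taken from this patch):

// Path 1: go through the URL parser and handle failure explicitly.
auto parsed = URL::Parser::basic_parse("https://example.com/index.html"sv);
if (!parsed.has_value())
    return Error::from_string_literal("unable to parse URL");

// Path 2: default-construct an empty URL and set each field manually
// (setter names here are assumed for illustration).
URL::URL url;
url.set_scheme("https"_string);
url.set_paths({ "index.html" });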
@@ -521,14 +518,15 @@ ErrorOr<void> HTMLImageElement::update_the_image_data(bool restart_animations, b
 // 1. Parse selected source, relative to the element's node document.
 // If that is not successful, then abort this inner set of steps.
 // Otherwise, let urlString be the resulting URL string.
-auto url_string = document().parse_url(selected_source.value());
-if (!url_string.has_value())
+auto maybe_url = document().parse_url(selected_source.value());
Contributor

The only thing I don't love about this commit is the back-and-forth between strings and URLs. For example, here we:

  1. Parse a string to a URL
  2. Serialize that URL back to a string to store in the current request
  3. The current request re-parses the URL to create a shared resource request
  4. We then have to serialize URLs again later to compare to the stored string

Getting whiplash 😅

Would it make more sense to either:

  1. Keep storing a URL in the current request, but make it Optional<URL>. And we treat an empty string as OptionalNone.
  2. Cache the URL on the request, so we have both the string and URL readily available? Then make set_current_url take an optional URL parameter so we can pass in the already-parsed URL here.
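
A rough sketch of what option 2 could look like (hypothetical member and parameter names, not a proposed API):

// Hypothetical shape for option 2: keep the serialized string and the parsed URL together,
// and let callers that already hold a parsed URL pass it in to avoid re-parsing.
void ImageRequest::set_current_url(String url_string, Optional<URL::URL> already_parsed_url)
{
    m_current_url_string = move(url_string);
    if (already_parsed_url.has_value())
        m_current_url = already_parsed_url.release_value();
    else
        m_current_url = URL::Parser::basic_parse(m_current_url_string);
}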

Comment on lines +325 to +326
auto url = URL::Parser::basic_parse(resource.value()->file_url());
VERIFY(url.has_value());
Contributor


Doing this a bunch makes me wonder if we should just have an explicit infallible URL parser
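Something along these lines, as a hypothetical helper just to sketch the idea:

// Hypothetical convenience wrapper: parse input that is known to be a valid URL,
// asserting on failure instead of returning an Optional.
static URL::URL parse_known_valid_url(StringView input)
{
    auto url = URL::Parser::basic_parse(input);
    VERIFY(url.has_value());
    return url.release_value();
}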
