Rollup of 15 pull requests #144398
Conversation
…petrochenkov Unquerify extern_mod_stmt_cnum. Based on rust-lang#143247. r? `@ghost` for perf
…=tgross35 std: net: uefi: Add support to query connection data
- Use EFI_TCP4_GET_MODE_DATA to be able to query ttl, nodelay, peer_addr and socket_addr.
- peer_addr is needed for the implementation of `accept`.
- cc `@nicholasbishop`
- Also a heads up: the UEFI spec seems to be wrong for [EFI_TCP4_CONFIG_DATA](https://uefi.org/specs/UEFI/2.11/28_Network_Protocols_TCP_IP_and_Configuration.html#efi-tcp4-protocol-getmodedata). `ControlOption` should be a pointer, as seen in [edk2](https://github.com/tianocore/edk2/blob/a1b509c1a453815acbc6c8b0fc5882fd03a6f2c0/MdePkg/Include/Protocol/Tcp4.h#L97).
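A rough sketch of the query shape described above, using simplified stand-in types rather than the real `r-efi`/`std::sys` definitions: `GetModeData` fills a TCP4 config structure, and the remote address/port fields are what `peer_addr` reads back. All names and fields here are assumptions for illustration.

```rust
use std::net::{Ipv4Addr, SocketAddrV4};

// Simplified stand-ins for the EFI_TCP4 mode data; field names only loosely
// mirror the spec and are not the actual definitions used by std.
#[derive(Default, Clone, Copy)]
struct Tcp4AccessPoint {
    remote_address: [u8; 4],
    remote_port: u16,
}

#[derive(Default, Clone, Copy)]
struct Tcp4ConfigData {
    time_to_live: u8,
    access_point: Tcp4AccessPoint,
}

// Stand-in for calling EFI_TCP4_PROTOCOL.GetModeData(): the caller provides
// storage and the protocol fills it in. A real implementation would go
// through the UEFI protocol and check the returned EFI_STATUS.
fn get_mode_data() -> Result<Tcp4ConfigData, ()> {
    Ok(Tcp4ConfigData::default())
}

// peer_addr can then be answered from the queried config data.
fn peer_addr() -> Result<SocketAddrV4, ()> {
    let data = get_mode_data()?;
    let ap = data.access_point;
    Ok(SocketAddrV4::new(Ipv4Addr::from(ap.remote_address), ap.remote_port))
}

fn main() {
    let _peer = peer_addr();
    let _ttl = get_mode_data().map(|d| d.time_to_live);
}
```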
…bank don't link to the nightly version of the Edition Guide in stable lints As reported in rust-lang#143557 for `rust_2024_incompatible_pat`, most future-Edition-incompatibility lints link to the nightly version of the Edition Guide; the lints were written before their respective Editions (and their guides) stabilized. But now that Rust 2021 and Rust 2024 are stable, these lints are emitted on stable versions of the compiler, where it makes more sense to present users with links that don't say "nightly" in them. This does not change the link for `rust_2024_incompatible_pat`; that's handled in rust-lang#144006.
…trochenkov Ensure we codegen the main fn This fixes two bugs. The one identified in the linked issue is that when we have a `main` function, mono collection didn't consider it as an extra collection root. The other is that, since CGU partitioning doesn't know about the call edges between the entrypoint functions, it can naively put them in different CGUs and mark them all as internal, which would result in LLVM just deleting all of them. There was an existing hack to exclude `lang = "start"` from internalization, which I've extended to include `main`. Fixes rust-lang#144052
…, r=fee1-dead Use serde for target spec json deserialize

The previous manual parsing of `serde_json::Value` was a lot of complicated code and extremely error-prone. It was full of janky behavior like sometimes ignoring type errors, sometimes erroring for type errors, sometimes warning for type errors, and sometimes just ICEing for type errors (the icing on the top). Additionally, many of the error messages about allowed values were out of date because they were in a completely different place than the `FromStr` impls. Overall, the system caused confusion for users. I also found the old deserialization code annoying to read: whenever a `key!` invocation was found, one had to first look for the right macro arm, and no go-to-definition could help.

This PR replaces all this manual parsing with a 2-step process involving serde. First, the string is parsed into a `TargetSpecJson` struct. This struct is a 1:1 representation of the spec JSON. It already parses all the enums and is very simple to read and write. Then, the fields from this struct are copied into the actual `Target`. There are two reasons for this two-step process instead of deserializing directly into a `Target`:

1. There are a few transformations performed between the two formats.
2. The default logic is implemented this way. Otherwise all the default field values would have to be spelled out again, which is suboptimal. With this logic, they fall out naturally, because everything in the JSON struct is an `Option`.

Overall, the mapping is pretty simple, with the vast majority of fields just doing a 1:1 mapping that is captured by two macros. I have deliberately avoided making the macros generic to keep them simple. All the `FromStr` impls now have the error message right inside them, which increases the chance of it being up to date. Some "`from_str`" impls were turned into proper `FromStr` impls to support this. The new code is much less involved, delegating all the JSON parsing logic to serde, without any manual type matching.

This change introduces a few breaking changes for consumers. While it is possible to use this format on stable, it is very much subject to change, so breaking changes are expected. The hope is also that, because of the stricter behavior, breaking changes are easier to deal with, as they come with clearer error messages.

1. Invalid types now always error, everywhere. Previously, they would sometimes error and sometimes just be ignored (which meant the user's JSON was still broken, just silently!).
2. This now makes use of `deny_unknown_fields` instead of just warning on unused fields, which was done previously. Serde doesn't make it easy to get such warning behavior, which was the primary reason this changed. But I think the error behavior is very reasonable too. If someone has random stale fields in their JSON, it is likely because these fields did something at some point but no longer do, and the user likely wants to be informed of this so they can figure out what to do. This is also relevant for the future: if we remove a field but someone has it set, it probably makes sense for them to take a look at whether they still need it and should look for alternatives, or whether they can just delete it. Overall, the JSON is made more explicit.

This is the only expected breakage, but there could also be small breakage from small mistakes. All targets roundtrip though, so it can't be anything too major.

fixes rust-lang#144153
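A minimal sketch of the two-step scheme (assuming `serde`, `serde_derive` and `serde_json` as dependencies), with invented stand-in names like `TargetSpecJsonSketch` rather than the real `rustc_target` types: every JSON-facing field is an `Option`, unknown keys are rejected via `deny_unknown_fields`, the `FromStr` impl carries its own error text, and a second step copies the parsed fields onto a pre-defaulted target.

```rust
use std::str::FromStr;

use serde::Deserialize;

// JSON-facing struct: a 1:1 view of the spec file, all fields optional.
#[derive(Deserialize)]
#[serde(deny_unknown_fields, rename_all = "kebab-case")]
struct TargetSpecJsonSketch {
    llvm_target: Option<String>,
    panic_strategy: Option<PanicStrategySketch>,
}

// Enum parsed through its FromStr impl, so the list of allowed values and
// the error message live right next to each other.
#[derive(Deserialize)]
#[serde(try_from = "String")]
enum PanicStrategySketch {
    Unwind,
    Abort,
}

impl FromStr for PanicStrategySketch {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "unwind" => Ok(Self::Unwind),
            "abort" => Ok(Self::Abort),
            _ => Err(format!("invalid panic strategy `{s}`, allowed values: unwind, abort")),
        }
    }
}

impl TryFrom<String> for PanicStrategySketch {
    type Error = String;
    fn try_from(s: String) -> Result<Self, Self::Error> {
        s.parse()
    }
}

// The "real" target starts from its defaults; step two only overrides what
// the JSON actually set, so the defaults never have to be restated.
#[derive(Default, Debug)]
struct TargetSketch {
    llvm_target: String,
    panic_unwind: bool,
}

fn apply(json: TargetSpecJsonSketch, target: &mut TargetSketch) {
    if let Some(t) = json.llvm_target {
        target.llvm_target = t;
    }
    if let Some(p) = json.panic_strategy {
        target.panic_unwind = matches!(p, PanicStrategySketch::Unwind);
    }
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let json = r#"{ "llvm-target": "x86_64-unknown-none", "panic-strategy": "abort" }"#;
    let spec: TargetSpecJsonSketch = serde_json::from_str(json)?;
    let mut target = TargetSketch::default();
    apply(spec, &mut target);
    println!("{target:?}");
    Ok(())
}
```

With this shape, a misspelled or stale key fails deserialization with a serde error naming the unknown field, and an invalid enum value surfaces the message written next to the `FromStr` arm.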
generate elf symbol version in raw-dylib For link names like `aaa@bbb`, it generates a symbol named `aaa` and a version named `bbb`. For link names like `aaa\0bbb`, `aaa@@bbb` or `aa@bb@cc`, it emits errors. It also adds a test that links an executable against glibc using raw-dylib. cc rust-lang#135694
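A hedged sketch of what a versioned import could look like from the user's side. The availability of `kind = "raw-dylib"` on ELF targets, the required feature gate, and the particular symbol/version pair are all assumptions here, chosen only to illustrate the `name@version` split.

```rust
// Illustrative only: a link name of the form `name@version` splits into the
// ELF symbol `gnu_get_libc_version` and the version `GLIBC_2.2.5`.
// A nightly feature gate is likely required for raw-dylib on ELF; its exact
// name is an assumption and is therefore not spelled out here.
use std::ffi::{c_char, CStr};

#[link(name = "c", kind = "raw-dylib")]
extern "C" {
    #[link_name = "gnu_get_libc_version@GLIBC_2.2.5"]
    fn gnu_get_libc_version() -> *const c_char;
}

fn main() {
    // Safety: glibc documents this symbol as returning a NUL-terminated
    // static string.
    let version = unsafe { CStr::from_ptr(gnu_get_libc_version()) };
    println!("linked glibc version: {}", version.to_string_lossy());
}
```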
…ported-in-another-issue, r=fee1-dead Add more test cases to check that the false note related to sealed traits is suppressed Closes rust-lang#143121. I started to fix the issue but found that it has already been addressed in rust-lang#143431. I added an additional test to prove that the reported problem has been resolved, just in case. I think we can discard this pull request if there's no need to add this kind of test 👍🏻
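For context, a minimal sketch of the sealed-trait pattern such tests revolve around (the names below are illustrative, not the ones in the actual test): a public trait with a private supertrait cannot be implemented outside the defining crate, so the missing-impl diagnostic should not misleadingly suggest implementing it.

```rust
// In the defining crate: `Sealed` lives in a private module, so downstream
// crates can name `MyTrait` but cannot implement it themselves.
mod sealed {
    pub trait Sealed {}
}

pub trait MyTrait: sealed::Sealed {
    fn describe(&self) -> &'static str;
}

// The only impls are provided here, next to the sealing module.
pub struct Builtin;
impl sealed::Sealed for Builtin {}
impl MyTrait for Builtin {
    fn describe(&self) -> &'static str {
        "built-in implementation"
    }
}

fn main() {
    println!("{}", Builtin.describe());
}
```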
coretests/num: use ldexp instead of hard-coding a power of 2 r? `@tgross35`
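The idea, as a hedged sketch (the helper below is a local stand-in, not necessarily the function the test actually calls): deriving the power of two from its exponent keeps the intent visible instead of burying it in a literal.

```rust
// Local ldexp-style helper: x * 2^exp. Multiplying by a power of two only
// adjusts the floating-point exponent, so this is exact for the moderate
// exponents a test would use.
fn ldexp(x: f64, exp: i32) -> f64 {
    x * 2.0f64.powi(exp)
}

fn main() {
    // Instead of hard-coding 8388608.0, derive it as 2^23.
    assert_eq!(ldexp(1.0, 23), 8388608.0);
}
```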
…enkov Use less HIR in check_private_in_public. r? `@petrochenkov`
add Rev::into_inner Tracking issue: rust-lang#144277
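A minimal sketch of the pattern such an accessor follows, using a hypothetical wrapper rather than the real `std::iter::Rev` (the new method is being added in this PR and may be unstable): an adapter that owns its inner iterator can simply hand it back.

```rust
// Hypothetical stand-in for an adapter like Rev: it wraps an iterator, and
// into_inner consumes the adapter to return the wrapped value.
struct Wrapper<I> {
    inner: I,
}

impl<I> Wrapper<I> {
    fn into_inner(self) -> I {
        self.inner
    }
}

fn main() {
    let wrapped = Wrapper { inner: 0..5 };
    // Recover the underlying iterator without iterating through the adapter.
    let range = wrapped.into_inner();
    assert_eq!(range.collect::<Vec<_>>(), vec![0, 1, 2, 3, 4]);
}
```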
…r=Kobzol pass build.npm from bootstrap to tidy and use it for npm install Followup to rust-lang#142924. r? `@Kobzol`
…BTreeMap-str, r=GuillaumeGomez rustdoc: avoid allocating a temp String for aliases in search index Here's the optimization I talked about in rust-lang#143988 (comment). I got around the Serialize issue using the newtype pattern. The wrapper type could be factored out into a helper that would work with anything that impls `AsRef<str>`, but I'm not sure if that would be helpful anywhere else. r? `@GuillaumeGomez`
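A hedged sketch of the newtype idea (the names are invented here, not rustdoc's actual types): wrapping a borrowed value lets the serializer emit the string directly instead of building a temporary owned `String` first, and the wrapper can be made generic over anything that implements `AsRef<str>`.

```rust
use serde::{Serialize, Serializer};

// Generic newtype over any string-like borrowed value.
struct StrNewtype<T: AsRef<str>>(T);

impl<T: AsRef<str>> Serialize for StrNewtype<T> {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        // Serialize the borrowed string directly; no intermediate String.
        serializer.serialize_str(self.0.as_ref())
    }
}

fn main() -> Result<(), serde_json::Error> {
    let alias: &str = "vec_deque";
    let json = serde_json::to_string(&StrNewtype(alias))?;
    assert_eq!(json, "\"vec_deque\"");
    Ok(())
}
```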
…ents-revert, r=GuillaumeGomez rustc_resolve: get rid of unused rustdoc::span_of_fragments_with_expansion This function can cause false negatives if used incorrectly (usually "do any of the doc fragments come from a macro" is the wrong question to ask), and thus it is unused. r? `@GuillaumeGomez`
…wLii Don't suggest assoc ty bound on non-angle-bracketed problematic assoc ty binding Fixes rust-lang#140543.
…ormed, r=oli-obk Stop using the old `validate_attr` logic for stability attributes I think this was accidentally missed when implementing the stability attributes? r? `@oli-obk` cc `@jdonszelmann`
@bors r+ p=5 rollup=never
No checks 🙄 thanks GH!
@bors r+
💡 This pull request was already approved, no need to approve it again.
☀️ Test successful - checks-actions
What is this? This is an experimental post-merge analysis report that shows differences in test outcomes between the merged PR and its parent PR.

Comparing 5d22242 (parent) -> 246733a (this PR)

Test differences: 21 test diffs were found across Stage 1 and Stage 2 (details collapsed). Additionally, 7 doctest diffs were found; these are ignored, as they are noisy. Job group index (collapsed).

Test dashboard: run

    cargo run --manifest-path src/ci/citool/Cargo.toml -- \
        test-dashboard 246733a3d978de41c5b77b8120ba8f41592df9f1 --output-dir test-dashboard

and then open the generated dashboard.

Job duration changes: How to interpret the job duration changes? Job durations can vary a lot, based on the actual runner instance.
📌 Perf builds for each rolled up PR:
previous master: 5d22242a3a

In the case of a perf regression, run the following command for each PR you suspect might be the cause:
Finished benchmarking commit (246733a): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below. Our benchmarks found a performance regression caused by this PR.

Next Steps:

@rustbot label: +perf-regression

Instruction count: Our most reliable metric. Used to determine the overall result above. However, even this metric can be noisy.

Max RSS (memory usage): Results (primary -1.6%, secondary 2.1%). A less reliable metric. May be of interest, but not used to determine the overall result above.

Cycles: Results (primary -2.2%, secondary -0.4%). A less reliable metric. May be of interest, but not used to determine the overall result above.

Binary size: This benchmark run did not return any relevant results for this metric.

Bootstrap: 466.913s -> 469.69s (0.59%)
Successful merges:

- #144358 (Stop using the old `validate_attr` logic for stability attributes)

r? @ghost
@rustbot modify labels: rollup
Create a similar rollup