
CLEVER: A Curated Benchmark for Formally Verified Code Generation
Jul 8, 2025 · TL;DR: We introduce CLEVER, a hand-curated benchmark for verified code generation in Lean. It requires full formal specs and proofs. No few-shot method solves all …
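To illustrate what "full formal specs and proofs" look like in Lean, here is a minimal sketch. The function, the specification clause, and the proof are illustrative assumptions, not taken from the benchmark's actual tasks.

```lean
-- Hypothetical example, not from the CLEVER benchmark: an implementation,
-- one specification clause, and a machine-checked proof that they agree.
def myMax (a b : Nat) : Nat :=
  if a ≤ b then b else a

-- Specification clause: the result is at least the first argument.
theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
  unfold myMax
  split
  · assumption           -- case a ≤ b: the branch hypothesis closes the goal
  · exact Nat.le_refl a  -- case ¬(a ≤ b): the result is a itself
```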
Leaving the barn door open for Clever Hans: Simple features predict LLM benchmark answers
Jan 22, 2025 · Lorenzo Pacchiardi, Marko Tesic, Lucy G Cheke, Jose Hernandez-Orallo …
STAIR: Improving Safety Alignment with Introspective Reasoning
May 1, 2025 · One common approach is training models to refuse unsafe queries, but this strategy can be vulnerable to clever prompts, often referred to as jailbreak attacks, which can …
Do Histopathological Foundation Models Eliminate Batch Effects?
Oct 12, 2024 · Keywords: histopathology, foundation models, batch effects, Clever Hans effect, robustness, generalization · Abstract: Deep learning has led to remarkable advancements in …
Contrastive Learning Via Equivariant Representation
Sep 26, 2024 · TL;DR: This paper proposes CLeVER, a novel equivariant-based contrastive learning framework that improves training efficiency and robustness in downstream tasks by …
Dual-Model Defense: Safeguarding Diffusion Models from …
Sep 27, 2024 · Membership inference and memorization are key challenges with diffusion models; mitigating such vulnerabilities is therefore an important topic. The idea of using an ensemble of …
Evaluating the Robustness of Neural Networks: An Extreme Value...
Feb 15, 2018 · Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness. The proposed CLEVER score is …
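As a rough illustration of how an extreme-value-based score of this kind can be computed, the sketch below samples gradient norms of the class-margin function around an input, fits a reverse Weibull distribution to the per-batch maxima, and divides the margin by the estimated local cross-Lipschitz constant. The toy model, sampling sizes, and finite-difference gradients are simplifying assumptions; the published method uses backpropagated gradients and norm choices matched to the perturbation's L_p ball.

```python
# Sketch of a CLEVER-style robustness score for L2 perturbations (assumed details).
import numpy as np
from scipy.stats import weibull_max

def margin_and_grad(f, x, true_cls, target_cls, eps=1e-4):
    """g(x) = f_true(x) - f_target(x) and a finite-difference gradient of g."""
    g = lambda z: f(z)[true_cls] - f(z)[target_cls]
    g0 = g(x)
    basis = np.eye(x.size).reshape(-1, *x.shape)
    grad = np.array([(g(x + eps * e) - g0) / eps for e in basis])
    return g0, grad

def clever_score(f, x0, true_cls, target_cls, radius=0.5,
                 n_batches=20, batch_size=50, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    batch_maxima = []
    for _ in range(n_batches):
        batch_max = 0.0
        for _ in range(batch_size):
            # Sample uniformly from the L2 ball of the given radius around x0.
            d = rng.normal(size=x0.shape)
            d *= radius * rng.uniform() ** (1.0 / x0.size) / np.linalg.norm(d)
            _, grad = margin_and_grad(f, x0 + d, true_cls, target_cls)
            batch_max = max(batch_max, np.linalg.norm(grad))  # L2 norm is self-dual
        batch_maxima.append(batch_max)
    # Extreme value theory: the location parameter of a reverse Weibull fit to
    # the batch maxima estimates the local cross-Lipschitz constant.
    _, loc, _ = weibull_max.fit(batch_maxima)
    g0, _ = margin_and_grad(f, x0, true_cls, target_cls)
    return g0 / loc  # estimated distortion needed to flip true_cls into target_cls

# Toy usage: a tiny random two-layer tanh network with 3 output classes.
rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(8, 5)), rng.normal(size=(3, 8))
f = lambda z: W2 @ np.tanh(W1 @ z)
x0 = 0.1 * np.ones(5)
true_cls = int(np.argmax(f(x0)))
print(clever_score(f, x0, true_cls, target_cls=(true_cls + 1) % 3))
```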
La RoSA: Enhancing LLM Efficiency via Layerwise Rotated Sparse...
May 1, 2025 · We use a clever technique that involves rotating the data within each layer of the model, making it easier to identify and keep only the most important parts for processing. This …
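The snippet below sketches one plausible reading of that idea: apply an orthogonal rotation to a layer's activations, keep only the top-k entries by magnitude in the rotated basis, and rotate back. The rotation construction, the top-k rule, and all names are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative sketch of a layerwise rotated sparse activation (assumed details).
import numpy as np

def rotated_topk_activation(x, W, R, k):
    """Rotate pre-activations, keep the k largest-magnitude entries, rotate back.

    x : (d_in,) input activations
    W : (d_out, d_in) layer weight
    R : (d_out, d_out) orthogonal rotation (R @ R.T ≈ I)
    k : number of rotated components to keep
    """
    h = W @ x                                   # dense pre-activation
    z = R @ h                                   # rotated basis concentrates importance
    keep = np.argpartition(np.abs(z), -k)[-k:]  # indices of the k largest magnitudes
    z_sparse = np.zeros_like(z)
    z_sparse[keep] = z[keep]                    # drop everything else
    return R.T @ z_sparse                       # map the sparse code back

# Toy usage with a random orthogonal rotation obtained from a QR decomposition.
rng = np.random.default_rng(0)
d_in, d_out, k = 16, 16, 4
R, _ = np.linalg.qr(rng.normal(size=(d_out, d_out)))
y = rotated_topk_activation(rng.normal(size=d_in), rng.normal(size=(d_out, d_in)), R, k)
```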
Right on Time: Revising Time Series Models by Constraining their...
Sep 27, 2024 · The reliability of deep time series models is often compromised by their tendency to rely on confounding factors, which may lead to incorrect outputs. Our newly recorded, …
Provably Mitigating Overoptimization in RLHF: Your SFT Loss is...
Jun 19, 2024 · With a clever use of the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines (i) a …
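As one hedged reading of an objective that couples preference optimization with supervised fine-tuning, the sketch below adds a maximum-likelihood (SFT) regularizer on the chosen responses to a DPO-style preference loss. The specific loss form, function names, and weighting are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch: preference loss plus an SFT regularizer (assumed form).
import torch
import torch.nn.functional as F

def preference_loss_with_sft(logp_chosen, logp_rejected,
                             ref_logp_chosen, ref_logp_rejected,
                             beta=0.1, sft_weight=0.5):
    """Combine a DPO-style preference term with an SFT term on chosen responses.

    logp_*     : summed policy log-probabilities of chosen/rejected responses
    ref_logp_* : the same quantities under a frozen reference policy
    """
    # Preference term: log-sigmoid of the scaled log-ratio margin.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    pref_term = -F.logsigmoid(margin).mean()
    # SFT regularizer: negative log-likelihood of the chosen responses, which
    # pulls the policy back toward the supervised data and damps overoptimization.
    sft_term = -logp_chosen.mean()
    return pref_term + sft_weight * sft_term

# Toy usage with random per-example log-probabilities.
n = 4
loss = preference_loss_with_sft(torch.randn(n), torch.randn(n),
                                torch.randn(n), torch.randn(n))
```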