Search Results for author: Douglas Leith

Found 10 papers, 1 paper with code

Attack Detection Using Item Vector Shift in Matrix Factorisation Recommenders

no code implementations • 1 Dec 2023 • Sulthana Shams, Douglas Leith

This paper proposes a novel method for detecting shilling attacks, in which attackers use false user-item feedback to promote a specific item, in Matrix Factorization (MF)-based Recommender Systems (RS).

Recommendation Systems
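
A minimal sketch of the item-vector-shift idea, assuming the detector compares each item's latent vector before and after an incremental MF retraining and flags outlying shifts; the cosine-distance score and z-score threshold below are illustrative choices, not the paper's statistic.

```python
import numpy as np

def item_vector_shift_scores(V_before, V_after):
    """Cosine distance between each item's latent vector before and after an
    incremental retraining of the MF model (rows = items)."""
    num = np.sum(V_before * V_after, axis=1)
    den = np.linalg.norm(V_before, axis=1) * np.linalg.norm(V_after, axis=1) + 1e-12
    return 1.0 - num / den

def flag_suspect_items(V_before, V_after, z_thresh=3.0):
    """Flag items whose shift is a large outlier relative to all items' shifts.
    The z-score rule is an illustrative choice, not the paper's detector."""
    shifts = item_vector_shift_scores(V_before, V_after)
    z = (shifts - shifts.mean()) / (shifts.std() + 1e-12)
    return np.where(z > z_thresh)[0]

# Toy usage: 100 items with 16 latent dimensions; item 7 is perturbed strongly,
# mimicking the drift that injected attack profiles could cause.
rng = np.random.default_rng(0)
V0 = rng.normal(size=(100, 16))
V1 = V0 + 0.01 * rng.normal(size=V0.shape)
V1[7] += rng.normal(size=16)
print(flag_suspect_items(V0, V1))   # expected to contain item 7
```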

High Accuracy and Low Regret for User-Cold-Start Using Latent Bandits

no code implementations • 12 May 2023 • David Young, Douglas Leith

We develop a novel latent-bandit algorithm for tackling the cold-start problem for new users joining a recommender system.

Recommendation Systems
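
A generic latent-bandit sketch for a cold-start user, assuming the new user belongs to one of K known latent classes with known mean reward profiles and that the algorithm Thompson-samples a class from its posterior each round; the Bernoulli feedback model and class-profile setup are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def latent_bandit_ts(mu, T=200, true_class=1, seed=0):
    """Generic latent-bandit sketch (not the paper's algorithm).
    mu[k, a] = assumed-known mean Bernoulli reward of arm a under latent class k.
    A posterior over the new user's latent class is maintained; a class is
    Thompson-sampled each round and its best arm is recommended."""
    rng = np.random.default_rng(seed)
    K, A = mu.shape
    log_post = np.zeros(K)                       # uniform prior over latent classes
    rewards = []
    for _ in range(T):
        w = np.exp(log_post - log_post.max())
        k = rng.choice(K, p=w / w.sum())         # sample a class from the posterior
        a = int(np.argmax(mu[k]))                # best arm under the sampled class
        r = rng.binomial(1, mu[true_class, a])   # Bernoulli feedback from the real user
        lik = mu[:, a] if r == 1 else 1.0 - mu[:, a]
        log_post += np.log(np.clip(lik, 1e-12, 1.0))   # Bayes update of class posterior
        rewards.append(r)
    return float(np.mean(rewards)), int(np.argmax(log_post))

# Toy usage: 3 latent classes, 3 items; the new user is secretly from class 1.
mu = np.array([[0.9, 0.2, 0.1],
               [0.1, 0.8, 0.3],
               [0.2, 0.3, 0.7]])
print(latent_bandit_ts(mu))   # mean reward approaching 0.8, class 1 recovered
```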

Bandit Convex Optimisation Revisited: FTRL Achieves $\tilde{O}(t^{1/2})$ Regret

no code implementations • 1 Feb 2023 • David Young, Douglas Leith, George Iosifidis

We show that a kernel estimator using multiple function evaluations can be easily converted into a sampling-based bandit estimator with expectation equal to the original kernel estimate.
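A small sketch of the stated conversion, assuming the generic importance-sampling construction: sample one evaluation point with probability proportional to the magnitude of its kernel weight and reweight the single observed value, so the one-evaluation estimator matches the multi-evaluation kernel estimate in expectation. The weights and evaluation points below are illustrative.

```python
import numpy as np

def kernel_estimate(f, points, weights):
    """Multi-evaluation kernel estimate: sum_i w_i * f(x_i)."""
    return sum(w * f(x) for x, w in zip(points, weights))

def one_point_estimate(f, points, weights, rng):
    """Single-evaluation estimator with the same expectation:
    sample index i with probability p_i proportional to |w_i|, return (w_i / p_i) f(x_i).
    Then E[(w_i / p_i) f(x_i)] = sum_i p_i (w_i / p_i) f(x_i) = sum_i w_i f(x_i)."""
    w = np.asarray(weights, dtype=float)
    p = np.abs(w) / np.abs(w).sum()
    i = rng.choice(len(w), p=p)
    return (w[i] / p[i]) * f(points[i])

# Toy usage: the Monte-Carlo average of one-point estimates matches the kernel estimate.
rng = np.random.default_rng(0)
f = lambda x: float(np.sum(x ** 2))
points = [np.array([0.1, 0.2]), np.array([0.5, -0.3]), np.array([-0.4, 0.4])]
weights = [0.5, 0.3, 0.2]
exact = kernel_estimate(f, points, weights)
mc = np.mean([one_point_estimate(f, points, weights, rng) for _ in range(20000)])
print(exact, mc)
```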

Two Models are Better than One: Federated Learning Is Not Private For Google GBoard Next Word Prediction

no code implementations • 30 Oct 2022 • Mohamed Suliman, Douglas Leith

We illustrate the effectiveness of the attacks against the next word prediction model used in Google's GBoard app, a widely used mobile keyboard app that has been an early adopter of federated learning for production use.

Federated Learning

Online Caching with no Regret: Optimistic Learning via Recommendations

no code implementations • 20 Apr 2022 • Naram Mhaisen, George Iosifidis, Douglas Leith

We build on the Follow-the-Regularized-Leader (FTRL) framework, developed further here to include predictions of the file requests, and design online caching algorithms for bipartite networks with pre-reserved or dynamic storage subject to time-average budget constraints.

Edge-computing
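
A hedged sketch of optimistic FTRL for a single fractional cache, assuming a quadratic regularizer, Euclidean projection onto the capped simplex, and a naive "repeat the last request" prediction; none of these choices are claimed to be the paper's design, and the bipartite-network and budget-constraint aspects are omitted.

```python
import numpy as np

def project_capped_simplex(y, C, iters=60):
    """Euclidean projection onto {x in [0,1]^N : sum(x) = C} via bisection on
    the shift tau in x_i = clip(y_i - tau, 0, 1)."""
    lo, hi = y.min() - 1.0, y.max()
    for _ in range(iters):
        tau = 0.5 * (lo + hi)
        if np.clip(y - tau, 0.0, 1.0).sum() > C:
            lo = tau
        else:
            hi = tau
    return np.clip(y - 0.5 * (lo + hi), 0.0, 1.0)

def optimistic_ftrl_caching(requests, N, C, sigma=1.0):
    """Optimistic FTRL sketch for a fractional cache of capacity C.
    The utility gradient at step t is the indicator of the requested file; the
    'prediction' of the next gradient is naively the last request. A fixed
    regularisation weight sigma is used for simplicity."""
    G = np.zeros(N)                        # cumulative utility gradients
    x = project_capped_simplex(np.full(N, C / N), C)
    hits = 0.0
    for r in requests:
        hits += x[r]                       # fractional hit on the requested file
        g = np.zeros(N)
        g[r] = 1.0
        G += g
        pred = g                           # optimistic guess: next request repeats this one
        x = project_capped_simplex((G + pred) / sigma, C)   # FTRL with quadratic regulariser
    return hits / len(requests)

# Toy usage: 50 files with mildly skewed popularity, cache capacity 10.
rng = np.random.default_rng(0)
pop = np.linspace(2.0, 1.0, 50)
requests = rng.choice(50, size=2000, p=pop / pop.sum())
print(optimistic_ftrl_caching(requests, N=50, C=10))   # average fractional hit ratio
```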

Online Caching with Optimistic Learning

1 code implementation • 22 Feb 2022 • Naram Mhaisen, George Iosifidis, Douglas Leith

The design of effective online caching policies is an increasingly important problem for content distribution networks, online social networks and edge computing services, among other areas.

Edge-computing

Lazy Online Gradient Descent is Universal on Polytopes

no code implementations • 3 Apr 2020 • Daron Anderson, Douglas Leith

We prove the familiar Lazy Online Gradient Descent algorithm is universal on polytope domains.
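For reference, a standard Lazy Online Gradient Descent (dual-averaging) sketch on the probability simplex, a simple polytope: the cumulative gradient sum is projected at every step, rather than the previous iterate. The step size and cost sequence are illustrative.

```python
import numpy as np

def project_simplex(y):
    """Euclidean projection onto the probability simplex (a polytope)."""
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(y) + 1) > 0)[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(y - tau, 0.0)

def lazy_ogd(grads, eta=0.1):
    """Lazy Online Gradient Descent: project the running sum of gradients
    (offset from the starting point), not the previous iterate."""
    d = grads[0].shape[0]
    G = np.zeros(d)
    x = np.full(d, 1.0 / d)
    iterates = []
    for g in grads:
        iterates.append(x)
        G += g
        x = project_simplex(np.full(d, 1.0 / d) - eta * G)   # lazy / dual-averaging step
    return np.array(iterates)

# Toy usage: noisy linear costs where coordinate 2 has the smallest average cost.
rng = np.random.default_rng(0)
d, T = 5, 500
grads = [rng.normal(size=d) + np.array([0.0, 0.0, -0.5, 0.0, 0.0]) for _ in range(T)]
xs = lazy_ogd(grads)
print(xs[-1])   # mass concentrates on the coordinate with smallest average cost
```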

Optimality of the Subgradient Algorithm in the Stochastic Setting

no code implementations • 10 Sep 2019 • Daron Anderson, Douglas Leith

We show that the Subgradient algorithm is universal for online learning on the simplex in the sense that it simultaneously achieves $O(\sqrt{N})$ regret for adversarial costs and $O(1)$ pseudo-regret for i.i.d. costs.
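
A toy run of projected online subgradient descent on the simplex against i.i.d. Bernoulli linear costs, reporting pseudo-regret; the $1/\sqrt{t}$ step size and bisection projection below are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

def project_simplex(y, iters=60):
    """Euclidean projection onto the probability simplex via bisection on tau
    in x_i = max(y_i - tau, 0), choosing tau so that sum(x) = 1."""
    lo, hi = y.min() - 1.0, y.max()
    for _ in range(iters):
        tau = 0.5 * (lo + hi)
        if np.maximum(y - tau, 0.0).sum() > 1.0:
            lo = tau
        else:
            hi = tau
    return np.maximum(y - 0.5 * (lo + hi), 0.0)

def subgradient_pseudo_regret(costs, means, eta0=1.0):
    """Projected online subgradient descent on the simplex with linear costs and
    a 1/sqrt(t) step size (illustrative schedule). Returns the pseudo-regret:
    expected cost of the iterates minus that of the best fixed vertex."""
    T, N = costs.shape
    x = np.full(N, 1.0 / N)
    pr = 0.0
    for t in range(T):
        pr += means @ x - means.min()                    # instantaneous pseudo-regret
        x = project_simplex(x - eta0 / np.sqrt(t + 1) * costs[t])
    return pr

# Toy usage: i.i.d. Bernoulli costs on 4 vertices; per the paper's result the
# pseudo-regret should stay bounded as the horizon grows, whereas adversarial
# regret would grow like the square root of the horizon.
rng = np.random.default_rng(0)
means = np.array([0.5, 0.45, 0.6, 0.55])
for T in (1000, 4000, 16000):
    costs = rng.binomial(1, means, size=(T, 4)).astype(float)
    print(T, round(subgradient_pseudo_regret(costs, means), 2))
```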
