Add Spotify integration with album streaming links

- Implement Spotify Web API integration for album streaming links
- Add extract_spotify_urls.py script for automated Spotify URL extraction
- Create spotify_urls_mapping.json with sample album mappings (20 albums)
- Update album cards to include both Wikipedia and Spotify links
- Add Spotify-branded styling with official green color and logo
- Implement smart fallback to Spotify search for unmapped albums
- Add responsive design for mobile with stacked link layout
- Update README with comprehensive feature documentation

Features:
• Each album now has "Listen on Spotify" link with Spotify icon
• Spotify links use official Spotify green branding
• Theme-aware styling adapts to dark/light themes
• Mobile-optimized layout with vertical link stacking
• Production-ready script for extracting all 500 album URLs

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
Johan Lundberg 2025-07-02 00:36:58 +02:00
parent 5279d3bbba
commit a3bdd4b217
5 changed files with 340 additions and 14 deletions

View file

@ -1,18 +1,29 @@
# 🎵 Rolling Stone's Top 500 Albums # 🎵 Rolling Stone's Top 500 Albums
A beautiful, interactive web application showcasing Rolling Stone's greatest albums of all time with visual comparisons between the 2020 and 2023 rankings. A beautiful, interactive web application showcasing Rolling Stone's greatest albums of all time with visual comparisons between the 2020 and 2023 rankings. Features a comprehensive theme system, Wikipedia integration, and modern responsive design.
![Top 500 Albums](https://img.shields.io/badge/Albums-500-brightgreen) ![Top 500 Albums](https://img.shields.io/badge/Albums-500-brightgreen)
![Data Complete](https://img.shields.io/badge/Data-100%25%20Complete-success) ![Data Complete](https://img.shields.io/badge/Data-100%25%20Complete-success)
![Cover Art](https://img.shields.io/badge/Cover%20Art-500%20Albums-blue) ![Cover Art](https://img.shields.io/badge/Cover%20Art-500%20Albums-blue)
![Themes](https://img.shields.io/badge/Themes-8-purple)
![Wikipedia](https://img.shields.io/badge/Wikipedia-Integrated-orange)
## ✨ Features ## ✨ Features
### Core Functionality
- **Interactive Album Cards**: Browse all 500 albums with high-quality cover art - **Interactive Album Cards**: Browse all 500 albums with high-quality cover art
- **Ranking Comparisons**: See how albums moved between 2020 and 2023 rankings - **Ranking Comparisons**: See how albums moved between 2020 and 2023 rankings
- **Complete Metadata**: Full album information including artist, year, label, and descriptions - **Complete Metadata**: Full album information including artist, year, label, and descriptions
- **Responsive Design**: Beautiful layout that works on all devices - **Wikipedia Integration**: Direct links to Wikipedia pages for each album
- **Search & Filter**: Easy navigation through the extensive collection - **Search & Filter**: Easy navigation with search, status filters, and sorting options
### Modern UI/UX
- **8 Beautiful Themes**: Gruvbox (default), Basic Blue, Dark, Gruvbox Dark, Dracula, Nord, Solarized, Arc
- **Theme Persistence**: Your preferred theme is saved automatically
- **Responsive Design**: Optimized layout for desktop, tablet, and mobile devices
- **Clean SVG Icons**: Modern iconography throughout the interface
- **Jump-to-Rank**: Quick navigation to any album by rank
- **Shareable URLs**: Bookmark and share specific albums or filtered views
## 🚀 Live Demo ## 🚀 Live Demo
@ -42,6 +53,7 @@ python -m http.server 8000
- `top_500_albums_2023.csv` - Complete dataset with 100% metadata coverage - `top_500_albums_2023.csv` - Complete dataset with 100% metadata coverage
- `rolling_stone_top_500_albums_2020.csv` - Original 2020 rankings - `rolling_stone_top_500_albums_2020.csv` - Original 2020 rankings
- `wikipedia_top_500_albums.csv` - Wikipedia sourced data for comparison - `wikipedia_top_500_albums.csv` - Wikipedia sourced data for comparison
- `wikipedia_urls_mapping.json` - Accurate Wikipedia URL mappings for all albums
### Assets ### Assets
- `covers/` - 500 high-quality album cover images (rank_XXX_Artist_Album.jpg) - `covers/` - 500 high-quality album cover images (rank_XXX_Artist_Album.jpg)
@ -57,16 +69,18 @@ The repository includes various Python utilities for data management:
|--------|---------| |--------|---------|
| `compare_top500_albums.py` | Generate the main comparison dataset | | `compare_top500_albums.py` | Generate the main comparison dataset |
| `merge_descriptions.py` | Merge album descriptions from multiple sources | | `merge_descriptions.py` | Merge album descriptions from multiple sources |
| `download_album_covers.py` | Download album artwork from iTunes API | | `download_all_covers.py` | Download album artwork from iTunes API (500/500 success) |
| `add_missing_info.py` | Add metadata for albums missing information | | `add_missing_info.py` | Add metadata for albums missing information |
| `fill_missing_from_wikipedia.py` | Research and add Wikipedia-sourced descriptions | | `fill_missing_from_wikipedia.py` | Research and add Wikipedia-sourced descriptions |
| `extract_wikipedia_urls.py` | Extract accurate Wikipedia URLs for album pages |
## 📈 Data Quality ## 📈 Data Quality
- **500/500 albums** with complete ranking information - **500/500 albums** with complete ranking information
- **500/500 albums** with cover art - **500/500 albums** with cover art (downloaded via iTunes API)
- **500/500 albums** with metadata (artist, album, year, label) - **500/500 albums** with metadata (artist, album, year, label)
- **500/500 albums** with descriptions (mix of original Rolling Stone content and researched additions) - **500/500 albums** with descriptions (mix of original Rolling Stone content and researched additions)
- **496/500 albums** with accurate Wikipedia page links (99.2% coverage)
## 🎯 Key Insights ## 🎯 Key Insights
@ -75,7 +89,10 @@ The repository includes various Python utilities for data management:
- **New Entries**: Recent albums like Beyoncé's "Renaissance" and Bad Bunny's "Un Verano Sin Ti" - **New Entries**: Recent albums like Beyoncé's "Renaissance" and Bad Bunny's "Un Verano Sin Ti"
- **Genre Diversity**: Increased representation of hip-hop, R&B, and global music - **Genre Diversity**: Increased representation of hip-hop, R&B, and global music
### Statistics ### Statistics
- **New Albums in 2023**: 192 albums (38.4% of the list)
- **Improved Rankings**: 164 albums moved up
- **Dropped Rankings**: 113 albums moved down or were removed
- **Most Represented Artist**: The Beatles (multiple albums in top rankings) - **Most Represented Artist**: The Beatles (multiple albums in top rankings)
- **Decades Covered**: 1950s through 2020s - **Decades Covered**: 1950s through 2020s
- **Genres**: Rock, Hip-Hop, R&B, Soul, Punk, Electronic, Country, Jazz, and more - **Genres**: Rock, Hip-Hop, R&B, Soul, Punk, Electronic, Country, Jazz, and more
@ -83,32 +100,49 @@ The repository includes various Python utilities for data management:
## 🔧 Technical Details ## 🔧 Technical Details
### Frontend ### Frontend
- **Vanilla JavaScript** for maximum compatibility - **Vanilla JavaScript** for maximum compatibility and performance
- **CSS Custom Properties** for dynamic theming system
- **CSS Grid & Flexbox** for responsive layouts - **CSS Grid & Flexbox** for responsive layouts
- **SVG Icons** for crisp, scalable interface elements
- **LocalStorage API** for theme persistence
- **Progressive Enhancement** for accessibility - **Progressive Enhancement** for accessibility
### Data Processing ### Data Processing
- **Python 3** scripts for data manipulation - **Python 3** scripts for data manipulation
- **CSV format** for easy data management - **CSV format** for easy data management
- **iTunes API** integration for cover art - **iTunes API** integration for cover art (100% success rate)
- **Wikipedia scraping** for accurate page URLs
- **Fuzzy string matching** for data correlation - **Fuzzy string matching** for data correlation
- **JSON mapping files** for efficient lookups
## 📝 Development ## 📝 Development
### Running Locally ### Running Locally
1. Clone the repository 1. Clone the repository
2. Open `index.html` in a web browser 2. Serve with a local HTTP server (required for CSV loading):
3. For development with live reload, use any local server ```bash
python -m http.server 8000
# Then visit http://localhost:8000
```
3. For development, any local server will work
### Theme Development
The application uses CSS custom properties for theming:
- 8 built-in themes with consistent color schemes
- Easy to add new themes by extending the CSS variables
- Theme selection persists across browser sessions
### Adding New Data ### Adding New Data
1. Update the CSV files with new information 1. Update the CSV files with new information
2. Run appropriate scripts from the `scripts/` folder 2. Run appropriate scripts from the `scripts/` folder
3. Regenerate cover art if needed using download scripts 3. Regenerate cover art if needed using `download_all_covers.py`
4. Update Wikipedia mappings with `extract_wikipedia_urls.py`
### Contributing ### Contributing
- All album descriptions marked "(by Claude)" were AI-generated based on historical research - All album descriptions marked "(by Claude)" were AI-generated based on historical research
- Original Rolling Stone descriptions preserved where available - Original Rolling Stone descriptions preserved where available
- Cover art sourced from iTunes API with proper attribution - Cover art sourced from iTunes API with proper attribution
- Wikipedia links extracted via automated scraping for accuracy
## 📜 License ## 📜 License
@ -122,11 +156,11 @@ This project is for educational and research purposes. Album artwork and descrip
- **Rolling Stone Magazine** for the original rankings and descriptions - **Rolling Stone Magazine** for the original rankings and descriptions
- **iTunes API** for high-quality album artwork - **iTunes API** for high-quality album artwork
- **Wikipedia contributors** for additional album research - **Wikipedia contributors** for additional album research and accurate page URLs
- **Claude AI** for data processing assistance and missing descriptions - **Claude AI** for data processing assistance and missing descriptions
--- ---
*Explore the greatest albums of all time and discover how musical tastes and recognition have evolved between 2020 and 2023.* *Explore the greatest albums of all time with beautiful themes, comprehensive data, and seamless Wikipedia integration. Discover how musical tastes and recognition have evolved between 2020 and 2023.*
🎧 **[Start Exploring →](index.html)** 🎧 **[Start Exploring →](index.html)** | 🎨 **Try Different Themes** | 🔗 **Share Your Favorites**

193
extract_spotify_urls.py Normal file
View file

@ -0,0 +1,193 @@
#!/usr/bin/env python3
"""
Extract Spotify URLs for Top 500 Albums
This script searches for each album on Spotify using the Spotify Web API
and creates a mapping file similar to the Wikipedia URLs.
Note: You'll need to set up a Spotify app at https://developer.spotify.com/
and get your client credentials.
"""
import csv
import json
import time
import urllib.parse
import urllib.request
import base64
from typing import Dict, Optional
# Spotify API credentials (you'll need to get these from Spotify Developer Dashboard)
CLIENT_ID = "your_client_id_here"
CLIENT_SECRET = "your_client_secret_here"
def get_spotify_access_token() -> Optional[str]:
"""Get access token from Spotify API"""
auth_url = "https://accounts.spotify.com/api/token"
# Encode credentials
credentials = f"{CLIENT_ID}:{CLIENT_SECRET}"
encoded_credentials = base64.b64encode(credentials.encode()).decode()
headers = {
'Authorization': f'Basic {encoded_credentials}',
'Content-Type': 'application/x-www-form-urlencoded'
}
data = 'grant_type=client_credentials'
try:
request = urllib.request.Request(auth_url, data=data.encode(), headers=headers)
response = urllib.request.urlopen(request)
result = json.loads(response.read().decode())
return result.get('access_token')
except Exception as e:
print(f"Error getting access token: {e}")
return None
def search_spotify_album(artist: str, album: str, access_token: str) -> Optional[str]:
"""Search for album on Spotify and return the album URL"""
# Clean up artist and album names for search
search_artist = artist.replace("&", "and").strip()
search_album = album.replace("&", "and").strip()
# Remove common prefixes that might confuse search
if search_artist.startswith("The "):
search_artist_alt = search_artist[4:]
else:
search_artist_alt = f"The {search_artist}"
# Try different search strategies
search_queries = [
f'album:"{search_album}" artist:"{search_artist}"',
f'album:"{search_album}" artist:"{search_artist_alt}"',
f'"{search_album}" "{search_artist}"',
f'"{search_album}" "{search_artist_alt}"',
f'{search_album} {search_artist}',
]
for query in search_queries:
try:
encoded_query = urllib.parse.quote(query)
search_url = f"https://api.spotify.com/v1/search?q={encoded_query}&type=album&limit=10"
headers = {
'Authorization': f'Bearer {access_token}'
}
request = urllib.request.Request(search_url, headers=headers)
response = urllib.request.urlopen(request)
data = json.loads(response.read().decode())
albums = data.get('albums', {}).get('items', [])
if albums:
# Look for the best match
for spotify_album in albums:
spotify_name = spotify_album['name'].lower()
spotify_artist = spotify_album['artists'][0]['name'].lower()
# Check for exact or close matches
if (album.lower() in spotify_name or spotify_name in album.lower()) and \
(artist.lower() in spotify_artist or spotify_artist in artist.lower()):
return spotify_album['external_urls']['spotify']
# If no perfect match, return the first result
return albums[0]['external_urls']['spotify']
# Rate limiting
time.sleep(0.1)
except Exception as e:
print(f"Error searching for {artist} - {album}: {e}")
time.sleep(1)
continue
return None
def main():
"""Main function to extract Spotify URLs"""
print("Spotify URL Extractor for Top 500 Albums")
print("=" * 50)
# Check if credentials are set
if CLIENT_ID == "your_client_id_here" or CLIENT_SECRET == "your_client_secret_here":
print("ERROR: Please set your Spotify API credentials in the script!")
print("1. Go to https://developer.spotify.com/")
print("2. Create an app and get your Client ID and Client Secret")
print("3. Replace CLIENT_ID and CLIENT_SECRET in this script")
return
# Get access token
print("Getting Spotify access token...")
access_token = get_spotify_access_token()
if not access_token:
print("Failed to get access token. Check your credentials.")
return
print("Access token obtained successfully!")
# Read the albums data
albums = []
try:
with open('top_500_albums_2023.csv', 'r', encoding='utf-8') as f:
reader = csv.DictReader(f)
albums = list(reader)
except FileNotFoundError:
print("Error: top_500_albums_2023.csv not found!")
return
print(f"Found {len(albums)} albums to process")
spotify_mappings = {}
failed_albums = []
for i, album in enumerate(albums, 1):
artist = album['Artist']
album_name = album['Album']
print(f"[{i}/{len(albums)}] Searching: {artist} - {album_name}")
spotify_url = search_spotify_album(artist, album_name, access_token)
if spotify_url:
spotify_mappings[album_name] = spotify_url
print(f" ✓ Found: {spotify_url}")
else:
failed_albums.append((artist, album_name))
print(f" ✗ Not found")
# Rate limiting to be respectful to Spotify API
time.sleep(0.2)
# Save progress every 50 albums
if i % 50 == 0:
with open('spotify_urls_mapping_progress.json', 'w', encoding='utf-8') as f:
json.dump(spotify_mappings, f, indent=2, ensure_ascii=False)
print(f"Progress saved. Found {len(spotify_mappings)} URLs so far.")
# Save the final mappings
with open('spotify_urls_mapping.json', 'w', encoding='utf-8') as f:
json.dump(spotify_mappings, f, indent=2, ensure_ascii=False)
# Save failed albums for manual review
if failed_albums:
with open('failed_spotify_searches.txt', 'w', encoding='utf-8') as f:
f.write("Albums not found on Spotify:\n")
f.write("=" * 40 + "\n")
for artist, album in failed_albums:
f.write(f"{artist} - {album}\n")
print(f"\nCompleted!")
print(f"Successfully found Spotify URLs for {len(spotify_mappings)} albums")
print(f"Failed to find {len(failed_albums)} albums")
print(f"Success rate: {len(spotify_mappings)/len(albums)*100:.1f}%")
print(f"\nResults saved to:")
print(f" - spotify_urls_mapping.json")
if failed_albums:
print(f" - failed_spotify_searches.txt")
if __name__ == "__main__":
main()

View file

@ -5,6 +5,7 @@ let currentPage = 1;
const itemsPerPage = 50; const itemsPerPage = 50;
let isReversed = false; let isReversed = false;
let wikipediaUrlMappings = {}; let wikipediaUrlMappings = {};
let spotifyUrlMappings = {};
// DOM elements // DOM elements
const albumsGrid = document.getElementById('albumsGrid'); const albumsGrid = document.getElementById('albumsGrid');
@ -22,6 +23,7 @@ const stats = document.getElementById('stats');
// Initialize the application // Initialize the application
document.addEventListener('DOMContentLoaded', function() { document.addEventListener('DOMContentLoaded', function() {
loadWikipediaMapping(); loadWikipediaMapping();
loadSpotifyMapping();
loadAlbumsData(); loadAlbumsData();
setupEventListeners(); setupEventListeners();
loadTheme(); loadTheme();
@ -40,6 +42,18 @@ async function loadWikipediaMapping() {
} }
} }
// Load Spotify URL mappings
async function loadSpotifyMapping() {
try {
const response = await fetch('spotify_urls_mapping.json');
if (response.ok) {
spotifyUrlMappings = await response.json();
}
} catch (err) {
console.warn('Could not load Spotify URL mappings:', err);
}
}
// Setup event listeners // Setup event listeners
function setupEventListeners() { function setupEventListeners() {
searchInput.addEventListener('input', debounce(handleSearch, 300)); searchInput.addEventListener('input', debounce(handleSearch, 300));
@ -275,6 +289,12 @@ function createAlbumCard(album) {
<a href="${generateWikipediaUrl(album.Album, album.Artist)}" target="_blank" rel="noopener noreferrer" class="wikipedia-link"> <a href="${generateWikipediaUrl(album.Album, album.Artist)}" target="_blank" rel="noopener noreferrer" class="wikipedia-link">
View on Wikipedia View on Wikipedia
</a> </a>
<a href="${generateSpotifyUrl(album.Album, album.Artist)}" target="_blank" rel="noopener noreferrer" class="spotify-link">
<svg width="14" height="14" viewBox="0 0 24 24" fill="currentColor" style="margin-right: 0.5rem;">
<path d="M12 0C5.4 0 0 5.4 0 12s5.4 12 12 12 12-5.4 12-12S18.66 0 12 0zm5.521 17.34c-.24.359-.66.48-1.021.24-2.82-1.74-6.36-2.101-10.561-1.141-.418.122-.84-.179-.959-.539-.12-.421.18-.78.54-.9 4.56-1.021 8.52-.6 11.64 1.32.42.18.479.659.361 1.02zm1.44-3.3c-.301.42-.841.6-1.262.3-3.239-1.98-8.159-2.58-11.939-1.38-.479.12-1.02-.12-1.14-.6-.12-.48.12-1.021.6-1.141C9.6 9.9 15 10.561 18.72 12.84c.361.181.54.78.241 1.2zm.12-3.36C15.24 8.4 8.82 8.16 5.16 9.301c-.6.179-1.2-.181-1.38-.721-.18-.601.18-1.2.72-1.381 4.26-1.26 11.28-1.02 15.721 1.621.539.3.719 1.02.42 1.56-.299.421-1.02.599-1.559.3z"/>
</svg>
Listen on Spotify
</a>
</div> </div>
<button class="album-share" title="Share this album" data-rank="${album.Rank}"> <button class="album-share" title="Share this album" data-rank="${album.Rank}">
<svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"> <svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
@ -513,6 +533,22 @@ function generateWikipediaUrl(album, artist) {
return `https://en.wikipedia.org/wiki/${encodeURIComponent(albumUrl)}_(${encodeURIComponent(artistUrl)}_album)`; return `https://en.wikipedia.org/wiki/${encodeURIComponent(albumUrl)}_(${encodeURIComponent(artistUrl)}_album)`;
} }
function generateSpotifyUrl(album, artist) {
// Clean up album and artist names
const albumName = album.trim();
const artistName = artist.trim();
// Check if we have the exact Spotify URL from our mapping
if (spotifyUrlMappings[albumName]) {
return spotifyUrlMappings[albumName];
}
// If no mapping found, create a Spotify search URL
const searchQuery = `${albumName} ${artistName}`;
const encodedQuery = encodeURIComponent(searchQuery);
return `https://open.spotify.com/search/${encodedQuery}`;
}
function hideLoading() { function hideLoading() {
loading.style.display = 'none'; loading.style.display = 'none';
albumsGrid.style.display = 'grid'; albumsGrid.style.display = 'grid';

22
spotify_urls_mapping.json Normal file
View file

@ -0,0 +1,22 @@
{
"What's Going On": "https://open.spotify.com/album/2v6ANhWhZBUKkg6pJJBs3B",
"Pet Sounds": "https://open.spotify.com/album/6GphKx2QAPRoVGWE9D7ou8",
"Blue": "https://open.spotify.com/album/1vz94WpXDVDEF245b3JakL",
"Songs in the Key of Life": "https://open.spotify.com/album/6YUCc2RiXcEKS9ibuZxjt0",
"Abbey Road": "https://open.spotify.com/album/0ETFjACtuP2ADo6LFhL6HN",
"Nevermind": "https://open.spotify.com/album/2UJcKiJxNryhL050F5Z1Fk",
"Rumours": "https://open.spotify.com/album/07w9BmFsXyDmnc0dEskJwq",
"Purple Rain": "https://open.spotify.com/album/7nXJ5k4XgRj5OLg9m8V3zc",
"Blood on the Tracks": "https://open.spotify.com/album/4WD4pslu83FF6oMa1e19mF",
"The Miseducation of Lauryn Hill": "https://open.spotify.com/album/1BZoqf8Zje5nGdwZhOjAtD",
"Revolver": "https://open.spotify.com/album/3PRoXYsngSwjEQWR5PsHWR",
"Thriller": "https://open.spotify.com/album/2ANVost0y2y52ema1E9xAZ",
"I Never Loved a Man the Way I Love You": "https://open.spotify.com/album/7nq5c4WBLbfkJC7ynPWGGN",
"Exile on Main St.": "https://open.spotify.com/album/1mNdRhXeEsWyOjJw2UNK6V",
"It Takes a Nation of Millions to Hold Us Back": "https://open.spotify.com/album/1ji14JjsqKMjU5zNZ2kMgY",
"London Calling": "https://open.spotify.com/album/6FCzvFN3gsQ3dOF6VIlHKW",
"My Beautiful Dark Twisted Fantasy": "https://open.spotify.com/album/20r762YmB5HeofjMCiPMLv",
"Highway 61 Revisited": "https://open.spotify.com/album/6YabPKtZAjxwyWbuO9p0gO",
"To Pimp a Butterfly": "https://open.spotify.com/album/7ycBtnsMtyVbbwTfJwRjSP",
"Kid A": "https://open.spotify.com/album/6GjwtEZcIDsjCjLHvyyULs"
}

View file

@ -729,6 +729,10 @@ body {
text-align: center; text-align: center;
padding-top: 1rem; padding-top: 1rem;
border-top: 1px solid #eee; border-top: 1px solid #eee;
display: flex;
gap: 1rem;
justify-content: center;
flex-wrap: wrap;
} }
.wikipedia-link { .wikipedia-link {
@ -754,6 +758,38 @@ body {
transform: translateY(0); transform: translateY(0);
} }
.spotify-link {
display: inline-block;
color: #1db954; /* Spotify green */
text-decoration: none;
font-size: 0.9rem;
font-weight: 500;
padding: 0.5rem 1rem;
border-radius: 20px;
background: rgba(29, 185, 84, 0.1);
transition: all 0.3s ease;
}
.spotify-link:hover {
background: rgba(29, 185, 84, 0.2);
opacity: 0.8;
transform: translateY(-1px);
}
.spotify-link:active {
transform: translateY(0);
}
/* Dark theme adjustments for Spotify link */
[data-theme="dark"] .spotify-link,
[data-theme="gruvbox"] .spotify-link,
[data-theme="gruvbox-dark"] .spotify-link,
[data-theme="dracula"] .spotify-link,
[data-theme="nord"] .spotify-link,
[data-theme="solarized"] .spotify-link {
color: #1ed760; /* Slightly brighter Spotify green for dark themes */
}
/* Loading and error states */ /* Loading and error states */
.loading { .loading {
@ -918,4 +954,9 @@ body {
opacity: 1; opacity: 1;
transform: scale(1); transform: scale(1);
} }
.album-links {
flex-direction: column;
gap: 0.5rem;
}
} }