techhub.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A hub primarily for passionate technologists, but everyone is welcome


#beautifulsoup


#python and #BeautifulSoup is why I ❤️ #LibreOffice!

# This belongs in Scripts/python/securities.py in securities.ods/

import uno
import unohelper
from com.sun.star.lang import Locale
from com.sun.star.awt import Rectangle
from com.sun.star.table import CellRangeAddress

from datetime import datetime, timedelta

import os
import sys
import ssl
from bs4 import BeautifulSoup
import json
import urllib.request
import re
from collections import defaultdict

doc = XSCRIPTCONTEXT.getDocument()
...
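The elided macro body isn't shown, but the imports (ssl, urllib.request, bs4) suggest it downloads quote pages and scrapes values out of them. A minimal sketch of that pattern, assuming a hypothetical page layout (the `span` tag and `price` class are illustrative, not from the original macro):

```python
import ssl
import urllib.request
from bs4 import BeautifulSoup

def parse_price(html):
    """Pull a price out of a hypothetical <span class="price"> tag."""
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("span", class_="price")
    return float(tag.get_text(strip=True)) if tag else None

def fetch_price(url, timeout=10):
    """Download a quote page over HTTPS and parse it."""
    ctx = ssl.create_default_context()
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=timeout, context=ctx) as resp:
        return parse_price(resp.read())
```

Keeping the parsing separate from the download makes the scraping logic testable without touching the network, which is handy inside a LibreOffice macro.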

Replied in thread

@aziz This is the offending code. As you can see, there's nothing to it; it just extracts one div. (200 files, each no more than 40k, take six seconds. Has to be a bug?):

from glob import glob
from bs4 import BeautifulSoup

p = []
for html in glob('*.html'):
    with open(html) as fp:
        soup = BeautifulSoup(fp, features='html.parser')
    pmap = soup.find('div', 'prod-contact')
    p.append(pmap)
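For what it's worth, six seconds for 200 small files is roughly what `html.parser` costs, since BeautifulSoup builds a full tree for every file even though only one div is needed. A common speedup (a sketch of the technique, not a claim about the original bug) is a `SoupStrainer`, which tells the parser to keep only matching elements:

```python
from bs4 import BeautifulSoup, SoupStrainer

# Parse only <div class="prod-contact"> elements and their contents;
# everything else in the document is skipped during parsing.
only_prod = SoupStrainer("div", class_="prod-contact")

def extract_contact(html_text):
    soup = BeautifulSoup(html_text, "html.parser", parse_only=only_prod)
    return soup.find("div", class_="prod-contact")
```

Switching to the `lxml` parser (if installed) is the other usual lever, and the two can be combined.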

Been reading a lot of articles about how AI is killing the job market for junior devs. They say most companies will end up with a handful of senior devs managing the AI and no one else.

I want to work in tech, but there's practically nothing for junior devs out there, and those jobs get snapped up by young new-grad geniuses I can't compete with.

With those senior devs, I wonder who will run things when they move on.

If you have a job for a NEW GRAD, I'd like to see it. Please don't make me regret my master's degree forever. It was one of the hardest things I've ever done.

Please don't make me try to make a living as a writer; if you think rejection sensitivity is bad for job interviews... wait till you pitch a literary agent.

I've been working on a #python webscraping #data collection app using the #beautifulsoup library, surfacing the #mariadb data through a #metabase report, for local non-permanent #housing in my state of #washington, specifically #graysharbor county.

I have a public report avail now over at reports.hogaboom.org/dashboard

Code is on #github github.com/ralphhogaboom/chick


Does anyone here know their way around web scraping?

I want to get all the URLs from this page; to do that, you have to click the 'Load more results' button at the bottom a few times. I'm trying it with #Python (cfscraper/requests and #BeautifulSoup), but I can't get the POST request right to fetch the page with all the results.

Any ideas?

Here's the page: neubaukompass.de/neubau-immobi

Update: Solved the problem; I copied the POST request from Firefox.
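The usual shape of that fix is to replay the 'load more' request from the browser's network tab and loop the page counter until nothing new comes back. A sketch under stated assumptions: the endpoint and the `page` parameter are hypothetical stand-ins for whatever the site actually sends, and the pagination loop is kept separate so it can be tested without the network:

```python
import urllib.parse
import urllib.request

def collect_urls(fetch, max_pages=50):
    """Keep requesting result pages until a page yields no new URLs.

    `fetch(page)` must return a list of URLs for that page; in practice
    it replays the POST copied from the browser's network tab.
    """
    seen, page = [], 1
    while page <= max_pages:
        batch = [u for u in fetch(page) if u not in seen]
        if not batch:
            break
        seen.extend(batch)
        page += 1
    return seen

def make_post_fetcher(endpoint):
    """Hypothetical fetcher: POSTs a page number, expects one URL per line."""
    def fetch(page):
        data = urllib.parse.urlencode({"page": page}).encode()
        with urllib.request.urlopen(endpoint, data=data, timeout=10) as resp:
            return resp.read().decode().splitlines()
    return fetch
```

With the loop factored out this way, you can swap in `cloudscraper` or `requests` for the actual POST without touching the pagination logic.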