Dataset columns (with observed ranges):
- question_id: int64 (59.5M to 79.7M)
- creation_date: string (date) (2020-01-01 00:00:00 to 2025-07-15 00:00:00)
- link: string (length 60 to 163)
- question: string (length 53 to 28.9k)
- accepted_answer: string (length 26 to 29.3k)
- question_vote: int64 (1 to 410)
- answer_vote: int64 (-9 to 482)
79,365,034
2025-1-17
https://stackoverflow.com/questions/79365034/how-to-recycle-list-to-build-a-new-column
How can I create the type column recycling a two-element list ["lat","lon"]? adresse coord type "place 1" 48.943837 lat "place 1" 2.387917 lon "place 2" 37.843837 lat "place 2" 6.387917 lon As it would be automatically done in R with d$type <- c("lat","lon") Reprex: d0 = pl.DataFrame( { "adresse": ["p...
Use pl.int_range() and pl.len() to create a "row number", and pl.Expr.over() to do it within each adresse group. ( d0.explode("coord") .with_columns( type = pl.int_range(pl.len()).over("adresse") ) ) shape: (4, 3) ┌─────────┬───────────┬──────┐ │ adresse ┆ coord ┆ type │ │ --- ┆ --- ┆ --- │ │ str ┆ f64 ┆ i64 │ ╞═════════╪══════...
5
3
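The accepted answer stays in Polars; the recycling itself can also be sketched in plain Python with itertools.cycle, which mirrors R's recycling of c("lat", "lon") (the coords list here is made-up sample data):

```python
from itertools import cycle, islice

coords = [48.943837, 2.387917, 37.843837, 6.387917]  # exploded coord column
# Recycle the two-element list across all rows, like R's d$type <- c("lat","lon")
types = list(islice(cycle(["lat", "lon"]), len(coords)))
print(types)  # ['lat', 'lon', 'lat', 'lon']
```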
79,364,336
2025-1-17
https://stackoverflow.com/questions/79364336/how-to-get-a-python-function-to-work-on-an-np-array-or-a-float-with-conditional
I have a function that I'd like to take numpy arrays or floats as input. I want to keep doing an operation until some measure of error is less than a threshold. A simple example would be the following to divide a number or array by 2 until it's below a threshold (if a float), or until it's maximum is below a threshold ...
What about checking the type of the input inside the function and adapting it? def f(x): # for float or np.array if type(x) is float: x = x * np.ones(1) while np.max(x)>1e-5: x = x/2 return x
2
2
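The same type-check idea works without NumPy: normalize a scalar to a list first, then treat both cases uniformly (halve_until_small is a made-up name for this sketch):

```python
def halve_until_small(x, threshold=1e-5):
    # Accept a float or a list of floats by normalizing to a list first
    if isinstance(x, float):
        x = [x]
    # Halve everything until the maximum drops below the threshold
    while max(x) > threshold:
        x = [v / 2 for v in x]
    return x

print(halve_until_small(8.0))        # a float is wrapped and halved
print(halve_until_small([4.0, 0.5])) # a list is halved element-wise
```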
79,364,619
2025-1-17
https://stackoverflow.com/questions/79364619/how-to-write-a-cell-from-an-if-statement
I'm learning Python (using MU) and trying to implement it in Excel (with Python in Excel), however I have found some difficulties which I can't understand: Cell values are (just an example): C1= 1 C2= 3 D1: i1=xl("C1") #gives value 1 D2: i2=xl("C2") #gives value 3 F1: i3=0 #i3 is defined, just to see what is happenin...
TL;DR Python in Excel will only evaluate the last expression or assignment as output. Option 1: Add i3 to the code block after the if statement as the last expression to be evaluated. if i1 < i2: i3 = i1 + i2 else: i3 = i1 * i2 i3 # add Option 2: Use a conditional expression. i3 = i1 + i2 if i1 < i2 else i1 * i2 Opti...
4
6
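Option 2's conditional expression, runnable with the question's sample cell values:

```python
i1, i2 = 1, 3  # the question's C1 and C2 values
# A conditional expression is a single expression, so as the last line of a
# "Python in Excel" code block it becomes the cell's output
i3 = i1 + i2 if i1 < i2 else i1 * i2
print(i3)  # 4, since 1 < 3
```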
79,364,611
2025-1-17
https://stackoverflow.com/questions/79364611/errormessage-nonetype-object-is-not-iterable
It's an AWS Lambda function: try: with urllib.request.urlopen(api_url) as response: data=json.loads(response.read().decode()) print(json.dumps(data, indent = 4 )) except Exception as e : print(f"error reading data : {e} ") return {"statuscode":"500","body":"Error fetching data"} games_messages = [game_format(game) for g...
The issue is not originating from the code block you provided, rather it is originating from line 15: quarter_scores = ', '.join([f"Q{q['Number']}: {q.get('AwayScore', 'N/A')}-{q.get('HomeScore', 'N/A')}" for q in quarters]) Error message states quarters is None. Indeed, in the data you fetched, quarters is null, (jso...
2
3
79,364,551
2025-1-17
https://stackoverflow.com/questions/79364551/how-to-subtract-pd-dataframegroupby-objects-from-each-other
I have the following pd.DataFrame match_id player_id round points A B C D E 5890 3750 1 10 0 0 0 3 1 5890 3750 2 10 0 0 0 1 0 5890 3750 3 10 0 8 0 0 1 5890 2366 1 9 0 0 0 5 0 5890 2366 2 9 0 0 0 5 0 5890 2366 3 9 0 0 0 2 0 I want to subtract the values of A, B, C, D and E of the two players and create two new columns ...
Use GroupBy.first and GroupBy.last, subtract the necessary columns, and add the points columns in concat: g = df.groupby(['match_id','round']) df1 = g.first() df2 = g.last() cols = ['A','B','C','D','E'] out = pd.concat([df1['points'].rename('points_home'), df2['points'].rename('points_away'), df1[cols].sub(df2[cols])], ...
2
3
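The first/last approach can be sketched with a cut-down frame (made-up numbers in the same shape as the question's table, keeping only points and one stat column A):

```python
import pandas as pd

df = pd.DataFrame({
    "match_id": [5890] * 6,
    "player_id": [3750] * 3 + [2366] * 3,
    "round": [1, 2, 3, 1, 2, 3],
    "points": [10, 10, 10, 9, 9, 9],
    "A": [0, 0, 8, 0, 0, 2],
})
g = df.groupby(["match_id", "round"])
first, last = g.first(), g.last()  # one row per player within each group
# Difference between the two players' stats per (match, round)
out = pd.concat(
    [first["points"].rename("points_home"),
     last["points"].rename("points_away"),
     first[["A"]].sub(last[["A"]])],
    axis=1,
).reset_index()
print(out)
```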
79,363,421
2025-1-17
https://stackoverflow.com/questions/79363421/how-to-simplify-a-linear-system-of-equations-by-eliminating-intermediate-variabl
I have a linear system shown in the block diagram below. This system is described with the following set of linear equations: err = inp - fb out = a * err fb = f * out I would like to use sympy to compute the output (out) of the function of the input (inp). Thus, I would like to eliminate the variables err and fb. I ...
I think you should specify all "outgoing" parameters, e.g., [out, fb, err], rather than [out] only, since [inp, a, f] could be treated as "constants" in this system of equations. from sympy import simplify, symbols, Eq, solve inp, err, out, fb, a, f = symbols("inp, err, out, fb, a, f") eqns = ( Eq(inp - fb, err), Eq(a ...
4
4
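The closed form that solve returns, out = a*inp/(1 + a*f), can be sanity-checked numerically without sympy by iterating the feedback loop itself (a minimal sketch with arbitrary gains, assuming |a*f| < 1 so the iteration converges):

```python
# Iterate the block diagram's equations until the loop settles, then compare
# with the closed form obtained by eliminating err and fb symbolically.
a, f, inp = 0.5, 0.5, 1.0
fb = 0.0
for _ in range(100):
    err = inp - fb     # err = inp - fb
    out = a * err      # out = a * err
    fb = f * out       # fb  = f * out

closed_form = a * inp / (1 + a * f)
print(out, closed_form)  # both 0.4 for these gains
```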
79,360,591
2025-1-16
https://stackoverflow.com/questions/79360591/ssl-certificate-verification-failed-with-sendgrid-send
I am getting an SSL verification failed error when trying to send emails with the Sendgrid web api. I'm not even sure what cert it is trying to verify here. I have done all of the user and domain verification on my Sendgrid account and I am using very straightforward sending process. Here the error urllib.error.URLErro...
Here is a link to the answer. It is not sendgrid specific but Python version specific. I was able to use the link provided in this answer to find the script that I needed to run to update the SSL certs for python. urllib and "SSL: CERTIFICATE_VERIFY_FAILED" Error
2
1
79,363,266
2025-1-16
https://stackoverflow.com/questions/79363266/how-can-i-write-zeros-to-a-2d-numpy-array-by-both-row-and-column-indices
I have a large (90k x 90k) numpy ndarray and I need to zero out a block of it. I have a list of about 30k indices that indicate which rows and columns need to be zero. The indices aren't necessarily contiguous, so a[min:max, min:max] style slicing isn't possible. As a toy example, I can start with a 2D array of non-zer...
Based on hpaulj's comment, I came up with this, which works perfectly on the toy example. a[np.ix_(indices, indices)] = 0.0 print(a) [[1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 0. 0. 1. 0. 1. 1.] [1. 1. 0. 0. 1. 0. 1. 1.] [1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 0. 0. 1. 0. 1. 1.]] It also worked beautifully o...
2
3
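A minimal, self-contained version of the np.ix_ trick on a small array (toy sizes, not the 90k x 90k case):

```python
import numpy as np

a = np.ones((6, 6))
indices = [2, 3, 5]  # non-contiguous rows/columns to zero out
# np.ix_ builds an open mesh, so the block at rows x cols {2,3,5} is selected
a[np.ix_(indices, indices)] = 0.0
print(a)
```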
79,362,636
2025-1-16
https://stackoverflow.com/questions/79362636/how-to-unjoin-of-different-plots-when-plotting-multiple-scatter-line-plots-on-on
I am plotting multiple line and scatter graphs on a single figure and axis. My code sets one variable called total_steel_area and then goes through a set of values of another variable called phi_x__h. It then calculates x and y values from these variables and places them in a list. It then plots the values. The code th...
You should initialize/reset your lists at each step of the outer loop: for total_steel_area in np.arange(0.01,0.06,0.01): phi_N_bh = [] phi_M_bh2 = [] for phi_x__h in np.arange(0.1,2,0.1): phi_N__bh, phi_M__bh2 = calculate_phi_N__bh_and_phi_M__bh2(phi_x__h, lamb, alpha_cc, eta, f_ck, E_s, varepsilon_cu2, phi_d__h, phi_...
2
0
79,362,404
2025-1-16
https://stackoverflow.com/questions/79362404/pandas-change-values-of-a-dataframe-based-on-an-override
I have a pandas dataframe which looks something like this. orig | dest | type | class | BKT | BKT_order | value | fc_Cap | sc_Cap -----+-------+-------+-------+--------+-----------+---------+--------+--------- AMD | TRY | SA | fc | MA | 1 | 12.04 | 20 | 50 AMD | TRY | SA | fc | TY | 2 | 11.5 | 20 | 50 AMD | TRY | SA | ...
That's a fairly complex task. The individual steps are straightforward though. You need: indexing lookup to find the cap values based on class merge to match the max_BKT mask+groupby.transform to identify the rows to mask idx, cols = pd.factorize(df['class']+'_Cap') group = ['orig', 'dest', 'type'] out = ( df.merge(o...
2
2
79,362,308
2025-1-16
https://stackoverflow.com/questions/79362308/how-to-use-skimage-to-denoise-2d-array-with-nan-values
I'm trying to apply the TV filter to 2D array which includes many nan values: from skimage.restoration import denoise_tv_chambolle import numpy as np data_random = np.random.random ([100,100])*100 plt.imshow(data_random) plt.imshow(denoise_tv_chambolle(data_random)) data_random[20:30, 50:60] = np.nan data_random[30:40,...
You can use a masked array: m = np.isnan(data_random) data = np.ma.masked_array(np.where(m, 0, data_random), m) plt.imshow(denoise_tv_chambolle(data, weight=50)) Example output (with weight = 50): For fewer artifacts, you could fill the holes with the average instead of zero: m = np.isnan(data_random) data = np.ma.mask...
3
6
79,357,401
2025-1-15
https://stackoverflow.com/questions/79357401/why-is-black-hole-null-geodesic-not-printing-zero-are-my-trajectories-correct
Using matplotlib, I am path tracing some 2D photons bending due to a non-rotating standard black hole defined by Schwarzschild metric. I set my initial velocities in terms of r (radius), phi (angle), and t (time) with respect to the affine parameter lambda and then iteratively update the space-time vector based on the ...
You are missing a factor of 2 in the expression for the acceleration component dv_p. Change that line to dv_p = - 2 * Γ_p_rp * v_r * v_p This is because Γ_p_pr = Γ_p_rp and you need to include that term as well in the summation. Neil Butcher's post also picks out another non-zero Christoffel symbol Γ_t_rt that I misse...
10
1
79,362,317
2025-1-16
https://stackoverflow.com/questions/79362317/text-representation-of-a-list-with-gaps
I have a list of integers that is sorted and contains no duplicates: mylist = [2, 5,6,7, 11,12, 19,20,21,22, 37,38, 40] I want a summarized text representation that shows groups of adjacent integers in a compressed form as a hyphenated pair. To be specific: Adjacent implies magnitude differing by 1. So an integer i is...
You can try this: mylist = [2, 5, 6, 7, 11, 12, 19, 20, 21, 22, 37, 38, 40] ans = [] ret = '' # assuming mylist is sorted for i in mylist: if len(ans) == 0 or ans[-1][1] < i - 1: #if the array is empty or we can't add current value to the last range ans.append([i, i]) # make a new range else: ans[-1][1] = i # add to la...
1
0
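One way to package the answer's range-building loop as a reusable function (compress is a hypothetical name):

```python
def compress(nums):
    # nums is sorted with no duplicates; collapse runs of adjacent integers
    ranges = []
    for n in nums:
        if not ranges or ranges[-1][1] != n - 1:
            ranges.append([n, n])   # start a new range
        else:
            ranges[-1][1] = n       # extend the current range
    return ", ".join(f"{a}" if a == b else f"{a}-{b}" for a, b in ranges)

mylist = [2, 5, 6, 7, 11, 12, 19, 20, 21, 22, 37, 38, 40]
print(compress(mylist))  # 2, 5-7, 11-12, 19-22, 37-38, 40
```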
79,361,494
2025-1-16
https://stackoverflow.com/questions/79361494/creating-reusable-and-composable-filters-for-pandas-dataframes
I am working with multiple Pandas DataFrames with a similar structure and would like to create reusable filters that I can define once and then apply or combine as needed. The only working solution I came up with so far feels clunky to me and makes it hard to combine filters with OR: import pandas as pd df = pd.DataFra...
You can use the .eval() method, which allows for the evaluation of a string describing operations on dataframe columns: Evaluate these string expressions on the dataframe df. Combine the results of these evaluations using the bitwise AND operator (&), which performs element-wise logical AND operation. Use the .loc a...
3
2
79,359,213
2025-1-15
https://stackoverflow.com/questions/79359213/efficient-parsing-and-processing-of-millions-of-json-objects-in-python
I have some working code that I need to improve the run time on dramatically and I am pretty lost. Essentially, I will get zip folders containing tens of thousands of json files, each containing roughly 1,000 json messages. There are about 15 different types of json objects interspersed in each of these files and some ...
This might improve performance, but whether significantly enough is still an open question: The parsing of JSON data is a CPU-bound task and by concurrently doing this parsing in a thread pool will not buy you anything unless orjson is implemented in C (probably) and releases the GIL (very questionable; see this). The ...
2
1
79,361,674
2025-1-16
https://stackoverflow.com/questions/79361674/subplot-four-pack-under-another-subplot-the-size-of-the-four-pack
I want to make a matplotlib figure that has two components: A 2x2 "four pack" of subplots in the lower half of the figure A subplot above the four pack that is the size of the four pack. I have seen this answer where subplots can have different dimensions. How can that approach be tweaked when there are multiple co...
You can use plt.subplot_mosaic or GridSpec. Someone else can write an answer using GridSpec, but here is how you'd do it using subplot_mosaic. For the large plot on top and 4 below it: import matplotlib.pyplot as plt fig, axs_dict = plt.subplot_mosaic("AA;AA;BC;DE") If you want to put the large plot on the left and th...
2
6
79,359,767
2025-1-15
https://stackoverflow.com/questions/79359767/implementation-of-f1-score-iou-and-dice-score
This paper proposes a medical image segmentation hybrid CNN - Transformer model for segmenting organs and lesions in medical images simultaneously. Their model has two output branches, one to output organ mask, and the other to output lesion mask. Now they describe the testing process as follows: In order to compare t...
Disclaimers: My answer is a mix of code reading and "educated guessing". I did not run the actual code, but a run with the help of a debugger should help you verify/falsify my assumptions. The code shared below is a condensed version of the relevant section of the score/metrics calculations, to help focus on the essen...
3
1
79,360,047
2025-1-16
https://stackoverflow.com/questions/79360047/issue-with-django-checkconstraint
I'm trying to add some new fields to an existing model and also a constraint related to those new fields: class User(models.Model): username = models.CharField(max_length=32) # New fields ################################## has_garden = models.BooleanField(default=False) garden_description = models.CharField( max_length...
Far as I can see your constraint is trying to make sure that all instances of user have has_garden=True, which would cause a violation if a user has has_garden=False. Here we add a constraint to either check if has_garden is true and garden_description is not null, or if has_garden is false and garden_descriptionn is n...
1
2
79,360,975
2025-1-16
https://stackoverflow.com/questions/79360975/how-to-color-nodes-in-network-graph-based-on-categories-in-networkx-python
I am trying to create a network graph on correlation data and would like to color the nodes based on categories. Data sample view: Data: import pandas as pd links_data = pd.read_csv("https://raw.githubusercontent.com/johnsnow09/network_graph/refs/heads/main/links_filtered.csv") graph code: import networkx as nx G = n...
Since you have a unique relationship from var1 to Category, you could build a list of colors for all the nodes using: import matplotlib as mpl cmap = mpl.colormaps['Set3'].colors # this has 12 colors for 11 categories cat_colors = dict(zip(links_data['Category'].unique(), cmap)) colors = (links_data .drop_duplicates('v...
2
0
79,360,171
2025-1-16
https://stackoverflow.com/questions/79360171/is-there-a-better-way-to-use-zip-with-an-arbitrary-number-of-iters
With a set of data from an arbitrary set of lists (or dicts or other iter), I want to create a new list or tuple that has all the first entries, then all the 2nd, and so on, like an hstack. If I have a known set of data, I can zip them together like this: data = {'2015': [2, 1, 4, 3, 2, 4], '2016': [5, 3, 3, 2, 4, 6], ...
While @Tinyfold's answer works, it should be pointed out that using sum to concatenate a series of iterables is rather inefficient as noted in the documentation: For some use cases, there are good alternatives to sum(). The preferred, fast way to concatenate a sequence of strings is by calling ''.join(sequence). To ad...
2
2
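A common efficient alternative to summing iterables for this interleave: transpose with zip(*...) and flatten with itertools.chain.from_iterable (sketch with a shortened version of the question's data):

```python
from itertools import chain

data = {'2015': [2, 1, 4], '2016': [5, 3, 3], '2017': [3, 2, 4]}
# zip(*...) yields all first entries, then all second entries, and so on;
# chain.from_iterable flattens the tuples without quadratic copying
flat = list(chain.from_iterable(zip(*data.values())))
print(flat)  # [2, 5, 3, 1, 3, 2, 4, 3, 4]
```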
79,360,261
2025-1-16
https://stackoverflow.com/questions/79360261/why-does-list-call-len
The setup code: class MyContainer: def __init__(self): self.stuff = [1, 2, 3] def __iter__(self): print("__iter__") return iter(self.stuff) def __len__(self): print("__len__") return len(self.stuff) mc = MyContainer() Now, in my shell: >>> i = iter(mc) __iter__ >>> [x for x in i] [1, 2, 3] >>> list(mc) __iter__ __len_...
The behavior of calling __len__ of the given iterable during initialization of a new list is an implementation detail and is meant to help pre-allocate memory according to the estimated size of the result list, as opposed to naively and inefficiently grow the list as it is iteratively extended with items produced by a ...
7
15
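The size-estimation mechanism the answer describes can be observed with operator.length_hint, which falls back to __len__; a small sketch (whether list() itself consults __len__ is a CPython implementation detail):

```python
import operator

class MyContainer:
    def __init__(self):
        self.stuff = [1, 2, 3]
    def __iter__(self):
        return iter(self.stuff)
    def __len__(self):
        return len(self.stuff)

mc = MyContainer()
# length_hint is the documented fallback chain: len() first, then __length_hint__
print(operator.length_hint(mc))  # 3
lst = list(mc)                   # may use the same hint to pre-allocate
print(lst)                       # [1, 2, 3]
```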
79,358,709
2025-1-15
https://stackoverflow.com/questions/79358709/parallelisation-optimisation-of-integrals-in-python
I need to compute a large set of INDEPENDENT integrals and I have currently written the following code for it in Python: # We define the integrand def integrand(tau,k,t,z,config, epsilon=1e-7): u = np.sqrt(np.maximum(epsilon,tau**2 - z**2)) return np.sin(config.omega * (t - tau))*k/2, np.sin(config.omega * (t - tau)) *...
A significant fraction of the time is spent in the Bessel function J1 and this function is relatively well optimized when numba-scipy is installed and the integrand function is compiled with numba (i.e. no need to call it from the slow CPython interpreter). That being said, in practice, this function is implemented in ...
2
2
79,359,839
2025-1-15
https://stackoverflow.com/questions/79359839/return-a-different-class-based-on-an-optional-flag-in-the-arguments-without-fact
I am implementing a series of classes in Equinox to enable taking derivatives with respect to the class parameters. Most of the time, the user will be instantiating class A and using the fn function to generate some data, the details of which are unimportant. However, in cases where we are interested in gradients, it i...
If you were using a normal class, what you did is perfectly reasonable: class A_abstract: pass class A_sigmoid(A_abstract): pass class A(A_abstract): def __new__(cls, flag, **kwds): if flag: instance = A_sigmoid.__new__(A_sigmoid) else: instance = super().__new__(cls) instance.__init__(**kwds) return instance print(typ...
2
0
79,359,697
2025-1-15
https://stackoverflow.com/questions/79359697/why-does-python-tuple-unpacking-work-on-sets
Sets don't have a deterministic order in Python. Why then can you do tuple unpacking on a set in Python? To demonstrate the problem, take the following in CPython 3.10.12: a, b = {"foo", "bar"} # sets `a = "bar"`, `b = "foo"` a, b = {"foo", "baz"} # sets `a = "foo"`, `b = "baz"` I recognize that the literal answer is ...
The core "why" is: Because all features start at -100 points, and nobody thought it was worth preventing sets from being used in this context. Every new feature costs developer resources to write it, write tests for it, code review it, and then maintain it forever. There has to be a significant benefit to the feature t...
1
3
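The mechanics can be demonstrated directly: unpacking accepts any iterable of matching length, and sorting first makes the assignment deterministic:

```python
s = {"foo", "bar"}
# Set iteration order is arbitrary; sort first for a deterministic unpack
a, b = sorted(s)
print(a, b)  # bar foo

try:
    x, y = {"foo", "bar", "baz"}  # wrong length still raises, as for any iterable
except ValueError as e:
    print(e)
```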
79,359,204
2025-1-15
https://stackoverflow.com/questions/79359204/specify-the-position-of-the-legend-title-variable-name-with-legend-on-top-with
When I move the legend of a Seaborn plot from its default position to the top of the plot, I want to have the variable (hue) name in the same row as the possible variable values. Starting from this plot: import seaborn as sns penguins = sns.load_dataset("penguins") ax = sns.relplot(penguins, x="bill_length_mm",y="flipp...
You could retrieve the legend, get its bbox and move the title down/left. Here I used half of the width of the bbox to the left and a fixed extra left and down: import seaborn as sns penguins = sns.load_dataset('penguins') g = sns.relplot(penguins, x='bill_length_mm', y='flipper_length_mm', hue='species') sns.move_lege...
2
1
79,358,829
2025-1-15
https://stackoverflow.com/questions/79358829/how-can-i-make-ruff-check-assume-a-specific-python-version-for-allowed-syntax
I am on Linux, created a Python 3.9 venv, installed ruff in the venv, wrote this code: def process_data(data: list[int]) -> str: match data: case []: return "No data" case [first, *_] if (average := lambda: sum(data) / len(data)) and average() > 50: return f"Data average is high: {average():.2f}, starting with {first}"...
Ruff doesn't support that (yet?). There's an open issue for precisely this problem. Our parser doesn't take into account the Python version aka target-version setting while parsing the source code. This means that we would allow having a match statement when the target Python version is 3.9 or lower. We want to signal...
3
3
79,358,567
2025-1-15
https://stackoverflow.com/questions/79358567/polars-replace-letter-in-string-with-uppercase-letter
Is there any way in Polars to replace the character just after the _ with uppercase using regex replace? So far I have achieved it using polars.Expr.map_elements. Is there any alternative using the native expression API? import re import polars as pl # Initialize df = pl.DataFrame( { "id": [ "accessible_bidding_strategy.id", "...
I don't think it's possible to "dynamically" modify the replacement in any of the Polars replace methods. You could create all possible mappings and use .str.replace_many() import string pl.Config(fmt_table_cell_list_len=10, fmt_str_lengths=120) df.with_columns( pl.col("id").str.replace_many( [f"_{c}" for c in string.a...
2
3
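For contrast, outside Polars the rewrite is a one-line re.sub with a callable replacement, which is exactly the "dynamic" replacement the expression API lacks (camelize_after_underscore is a made-up name):

```python
import re

def camelize_after_underscore(s):
    # Replace each "_x" with "_X" via a callable replacement
    return re.sub(r"_([a-z])", lambda m: "_" + m.group(1).upper(), s)

print(camelize_after_underscore("accessible_bidding_strategy.id"))
# accessible_Bidding_Strategy.id
```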
79,358,141
2025-1-15
https://stackoverflow.com/questions/79358141/create-row-for-each-data-in-list-python
I'm trying to create a code to show me some stock stats. For that I need to iterate through a list of stocks in python and for each one show some details. So far I have this code: import yfinance as yf import pandas as pd tickerlist = ['AAPL', 'MSFT', 'AMZN'] stock_data = [] for stock in tickerlist: info = yf.Ticker(st...
Instead of adding each piece of info you need as a separate element to stock_data, you should have a list of iterables, where each element of the list contains all of the relevant data for that stock. Note that you can also explicitly provide the column names to get a "nicer" output: import yfinance as yf import pandas ...
3
3
79,392,937
2025-1-28
https://stackoverflow.com/questions/79392937/django-5-1-postgresql-debian-server
Trying to connect to a PostgreSQL database as Django describes in its docs: https://docs.djangoproject.com/en/5.1/ref/databases/#postgresql-notes DATABASES = { "default": { "ENGINE": "django.db.backends.postgresql", "OPTIONS": { "service": "my_service", "passfile": ".my_pgpass", }, } } I've created 2 files in my home directory ....
You can't use 'service': 'my_service', 'passfile': '.pgpass'. So use decouple and the traditional DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': 'dbms_db', 'USER': 'dbms', 'PASSWORD': DB_PASS, 'HOST': '127.0.0.1', 'PORT': '5432', } }
1
0
79,394,429
2025-1-28
https://stackoverflow.com/questions/79394429/performance-and-memory-duplication-when-using-multiprocessing-with-a-multiple-ar
I am having difficulties understanding the logic of optimizing multiprocessing in Python 3.11. Context: I am running on Ubuntu 22.04, x86 12 cores / 32 threads, 128 GB memory Concerns: (Please refer to code and result below). Both multiprocessing functions using a local df (using map+partial or starmap) take a lot more tim...
When you fork a child process (in this case you are forking multiprocessing.cpu_count() processes) it is true that the child process inherits the memory of the forking process. But copy-on-write semantics are used such that when that inherited memory is modified, it is first copied, resulting in increased memory utilization....
1
1
79,394,750
2025-1-28
https://stackoverflow.com/questions/79394750/how-to-enable-seperate-action-for-double-click-and-drag-and-drop-for-python-file
I have a simple python script and I want to enable dragging and dropping files into the python script in Windows 11. This works. But only if I set python launcher as default application for python file types. I want to launch my editor when double clicking. My idea is to create a batch file as default application for t...
code.exe has been moved out of the bin folder. Inside the bin folder is only code and code.cmd. It is now one folder up. Running code.exe directly solves the problem. No window is created. Here is C:\drag\python_drag.cmd: @echo off if not "%~2"=="" ( rem Arguments detected, so files were dragged on it. Calling it with ...
1
1
79,394,708
2025-1-28
https://stackoverflow.com/questions/79394708/how-to-convert-an-imagehash-to-a-numpy-array
I use Python / imagehash.phash to search for photos similar to a given one, i.e. photos so that the hamming distance is zero. Now I also would like to detect photos that are flipped or rotated copies - the hamming distance is > 0. Instead of flipping or rotating the photo and then calculating a new pHash I would like ...
From reading the source code, I found that you can convert between an imagehash.ImageHash object and a binary array using .hash and the imagehash.ImageHash() constructor. from PIL import Image import imagehash img = Image.open("house.jpg") image_hash_obj = imagehash.phash(img) print(image_hash_obj) hash_array = image_h...
1
1
79,393,979
2025-1-28
https://stackoverflow.com/questions/79393979/distribute-value-to-fill-and-unfill-based-on-a-given-condition
The problem: I want to distribute a value that can be positive or negative from one row into multiple rows, where each row can only contain a specific amount, if the value to distribute is positive it fills rows, if it is negative it "unfills". It is possible to fill/unfill partially a row and the order of filling do m...
For each group defined by acc, we can compute the total value as the sum of all payments (sum column in right) and the new payment (val column in left) in the group. Given this total amount, we can distribute it across rows in the group. Particularly, we take the total value and subtract the amount that could've alread...
2
1
79,392,129
2025-1-27
https://stackoverflow.com/questions/79392129/how-to-make-an-asynchronous-python-call-from-within-scala
I am trying to access the Python Client V4 for DyDx from within a scala project. I integrated the previous V3 using scalapy library. But V4 contains asynchronous calls which I don't know how I should handle. So for example the short term order composite example's test function begins with the following lines: node = aw...
A solution I found is to use the asyncio.run function, which can be used to run a coroutine: val asyncio = py.module("asyncio") val node = asyncio.run(client.NodeClient.connect(network.TESTNET.node))
3
0
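On the Python side, what asyncio.run does can be sketched with a stand-in coroutine (connect here is hypothetical, not the dydx client's API):

```python
import asyncio

async def connect():
    # Stand-in for the library's async NodeClient.connect(...)
    await asyncio.sleep(0)
    return "connected"

# asyncio.run drives the coroutine to completion and returns its result,
# which is why it works as a single synchronous entry point from Scala/scalapy
node = asyncio.run(connect())
print(node)  # connected
```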
79,393,859
2025-1-28
https://stackoverflow.com/questions/79393859/get-pathlib-path-with-symlink
Let's say I open a python console inside /home/me/symlink/dir, which is actually a shortcut to /home/me/path/to/the/dir. In this console I execute the following code: from pathlib import Path path = Path.cwd() print(path) # "/home/me/path/to/the/dir" So cwd() actually resolves the symlinks and get the absolute path au...
You'll need to use os.getcwd(). Path.cwd() internally uses os.getcwdb(), which resolves symlinks, while os.getcwd() preserves them. from pathlib import Path import os path = Path(os.getcwd()) print(path) If you need to work with symlinks in general: path.is_symlink() - check if a path is a symlink path.resolve() - resolv...
2
2
79,391,105
2025-1-27
https://stackoverflow.com/questions/79391105/calculate-a-monthly-minimum-mean-and-maximum-on-daily-temperature-data-for-febr
I'm fairly new to Python and I'm trying to calculate the minimum, average and maximum monthly temperature from daily data for February. I'm having a bit of trouble applying my code from other months to February. Here is my code for the 31-day months: import xarray as xr import numpy as np import copernicusmarine DS = ...
As I said in my comments, the problems in your different attempts come from the indexes you use for var_arr. In the 1st case, with 2 different ind_time_.. indexes, the data is superposed at the start of var_arr, like in the following figure; this both causes lost data and many zeroes left at the end of the array, which...
1
1
79,393,186
2025-1-28
https://stackoverflow.com/questions/79393186/convolving-with-a-gaussian-kernel-vs-gaussian-blur
While looking for a way to generate spatially varying noise, I came across this answer, which is able to do what I wanted. But I am getting confused about how the code works. From what I understand, the first step of the code generates a gaussian kernel: import numpy as np import scipy.signal import matplotlib.pyplot a...
I've played a bit with the code you provided, and this seems to be related to the fact that you use standard deviations for your gaussian kernel that are very different, your correlation_scale is 150 in the first example whereas sigma is one in your second example. If I take similar values for both, I get similar resul...
1
1
79,393,295
2025-1-28
https://stackoverflow.com/questions/79393295/filter-dataframe-by-nearest-date
I am trying to filter my Polars DataFrame for dates that are nearest to a given date. For example: import polars import datetime data = { "date": ["2025-01-01", "2025-01-01", "2025-01-01", "2026-01-01"], "value": [1, 2, 3, 4], } df = polars.DataFrame(data).with_columns([polars.col("date").cast(polars.Date)]) shape: (4...
You don't need to add the temporary column, just filter directly: df.filter((m:=(pl.col('date')-date).abs()).min() == m) Or, without the walrus operator: diff = (pl.col('date')-date).abs() df.filter(diff.min() == diff) Output: ┌────────────┬───────┐ │ date ┆ value │ │ --- ┆ --- │ │ date ┆ i64 │ ╞════════════╪═══════╡...
4
3
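The nearest-date criterion itself, minimizing the absolute difference, in plain Python on dates like the question's (the target date is made up):

```python
from datetime import date

dates = [date(2025, 1, 1), date(2025, 1, 1), date(2026, 1, 1)]
target = date(2025, 3, 1)
# Nearest date = the one minimizing the absolute difference to the target;
# the Polars filter keeps every row achieving this minimum
nearest = min(dates, key=lambda d: abs(d - target))
print(nearest)  # 2025-01-01
```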
79,393,122
2025-1-28
https://stackoverflow.com/questions/79393122/transforming-data-with-implicit-categories-in-header-with-pandas
I have a table like: | 2022 | 2022 | 2021 | 2021 class | A | B | A | B -----------|------|------|------|------ X | 1 | 2 | 3 | 4 Y | 5 | 6 | 7 | 8 How can I transform it to following form? year | category | class | value ---------------------------------- 2022 | A | X | 1 2022 | A | Y | 5 2022 | B | X | 2 2022 | B | ...
You could melt with ignore_index=False and rename_axis/rename: out = (df.rename_axis(columns=['year', 'category']) .melt(ignore_index=False) .reset_index() ) Or: out = (df.melt(ignore_index=False) .rename(columns={'variable_0': 'year', 'variable_1': 'category'}) .reset_index() ) Output: class year category value 0 X...
2
2
79,392,980
2025-1-28
https://stackoverflow.com/questions/79392980/is-there-a-method-to-check-generic-type-on-instancing-time-in-pyhon
How can I check, in __init__ at instantiation time, which type was specified for a Generic type parameter? On Python 3.10, it seems it cannot be done. At first, I found this page: Instantiate a type that is a TypeVar, and wrote a small utility to get __orig_class__. But when testing at instantiation time, there is no __orig_class__ attribute on self at that point. Then...
When you write something like: from typing import Generic, TypeVar T = TypeVar('T') class MyGeneric(Generic[T]): def __init__(self): print(getattr(self, '__orig_class__', None)) x = MyGeneric[int]() You may see x.__orig_class__ appear on some Python versions and not on others. That is an internal artifact of how the CPyth...
1
1
79,391,458
2025-1-27
https://stackoverflow.com/questions/79391458/surprising-lack-of-speedup-in-caching-numpy-calculations
I need to do a lot of calculations on numpy arrays, with some of the calculations being repeated. I had the idea of caching the results, but observe that In most cases, the cached version is slower than just carrying out all calculations. Not only is the cached version slower, line profiling also indicates that the ab...
TL;DR: page faults explain why the cache-based version is significantly slower than the one without a cache when num_iter is small. This is a side effect of creating many new Numpy arrays and deleted only at the end. When num_iter is big, the cache becomes more effective (as explained in the JonSG's answer). Using anot...
10
10
79,392,330
2025-1-27
https://stackoverflow.com/questions/79392330/is-there-an-add-element-to-a-set-method-returning-whether-the-element-was-actual
I would like to achieve in Python the following semantics which is present in Kotlin: myset = set() for elem in arr: if not myset.add(elem): return 'duplicate found' I am looking for a way to get a slight performance boost by 'hijacking' add-to-set operation by analysing the returned value. It should be better if comp...
No, and it has been rejected.
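Since set.add() always returns None, the usual Python idiom is an explicit membership test before the add. A sketch of the Kotlin loop translated to that idiom (the helper name is mine):

```python
def first_duplicate(arr):
    """Return the first repeated element, or None if all are unique."""
    seen = set()
    for elem in arr:
        if elem in seen:   # membership test replaces Kotlin's add() result
            return elem
        seen.add(elem)
    return None

first_duplicate([3, 1, 4, 1, 5])  # → 1
```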
1
1
79,392,068
2025-1-27
https://stackoverflow.com/questions/79392068/multiple-facet-plots-with-python
I have data from different locations and the following code to plot rainfall and river discharge for each location separately. import pandas as pd import matplotlib.pyplot as plt def hydrograph_plot(dates, rain, river_flow): # figure and subplots fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 6), sharex=True, gridsp...
You can use subplots2grid import pandas as pd import matplotlib.pyplot as plt def hydrograph_plot(dates, rain, river_flow, posx, posy): # figure and subplots ax1 = plt.subplot2grid((6,2), (posy*3, posx), colspan=1, rowspan=1) ax2 = plt.subplot2grid((6,2), (posy*3+1, posx), colspan=1, rowspan=2, sharex=ax1) # bar graph ...
2
1
79,387,290
2025-1-25
https://stackoverflow.com/questions/79387290/get-click-to-not-expand-variables-in-argument
I have a simple Click app like this: import click @click.command() @click.argument('message') def main(message: str): click.echo(message) if __name__ == '__main__': main() When you pass an environment variable in the argument, it expands it: ➜ Desktop python foo.py '$M0/.viola/2025-01-25-17-20-23-307878' M:/home/ramra...
The solution is to pass windows_expand_args=False when calling the main command.
3
1
79,392,038
2025-1-27
https://stackoverflow.com/questions/79392038/pandas-batch-update-account-string
My organization has account numbers that are comprised of combining multiple fields. The last field is always 4 characters (typically 0000) Org Account 01 01-123-0000 01 01-456-0000 02 02-789-0000 02 02-456-0000 03 03-987-0000 03 03-123-1234 I also have a dictionary mapping of how many characters the last component sh...
Since working with strings is hardly vectorizable, just use a simple python function and a list comprehension: def replace(org, account): a, b = account.rsplit('-', maxsplit=1) if org == '03': return f'{a}-{D03_SPECIAL_MAP[b]}' return f'{a}-{b[:MAP[org]]}' df['Account'] = [replace(o, a) for o, a in zip(df['Org'], df['A...
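A self-contained version of that approach (the MAP values and sample frame below are illustrative, since the question's full mapping dictionary is not shown):

```python
import pandas as pd

df = pd.DataFrame({
    "Org": ["01", "02"],
    "Account": ["01-123-0000", "02-456-0000"],
})
# hypothetical: how many characters of the last component to keep per Org
MAP = {"01": 2, "02": 3}

def replace(org, account):
    a, b = account.rsplit("-", maxsplit=1)
    return f"{a}-{b[:MAP[org]]}"

df["Account"] = [replace(o, a) for o, a in zip(df["Org"], df["Account"])]
# df["Account"] → ["01-123-00", "02-456-000"]
```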
1
2
79,389,136
2025-1-26
https://stackoverflow.com/questions/79389136/how-to-create-a-python-module-in-c-that-multiprocessing-does-not-support
I am trying and failing to reproduce and understand a problem I saw where multiprocessing failed when using a python module written in C++. My understanding was that the problem is that multiprocessing needs to pickle the function it is using. So I made my_module.cpp as follows: #include <pybind11/pybind11.h> int add(i...
I don't think your code tries to pickle a module as-is? If you redefine parallel_add to take a module as an argument, then use a partial to pass my_module into it, you can force Python to do that. import my_module from functools import partial from multiprocessing import Pool # Same wrapper, but now takes a module as a...
1
3
79,381,851
2025-1-23
https://stackoverflow.com/questions/79381851/whats-the-fastest-way-of-skipping-tuples-with-a-certain-structure-in-a-itertool
I have to process a huge number of tuples made of k integers, each ranging from 1 to Max_k. Each Max can be different. I need to skip the tuples where an element has reached its max value, in that case keeping only the tuple with "1" in the remaining positions. The max is enforced by design, so it cannot be that some ite...
Since you commented that order between tuples isn't important, we can simply produce the tuples with max value and then the tuples without max value: from itertools import * def tuples_direct(max_list): n = len(max_list) # Special case if 1 in max_list: yield (1,) * n return # Tuples with a max. for i, m in enumerate(m...
2
0
79,389,115
2025-1-26
https://stackoverflow.com/questions/79389115/how-to-map-values-from-a-3d-tensor-to-a-1d-tensor-in-pytorch
I'm stuck with a Pytorch problem and could use some help: I've got two tensors: A 3D tensor (shape: i, j, j) with integer values from 0 to n A 1D tensor (shape: n) I need to create a new tensor that's the same shape as the first one (i, j, j), but where each value is replaced by the corresponding value from the secon...
This should work: small_tensor[big_tensor] Take note that the type of the big_tensor must be long/int. Edit: In response to the comment of @simon, I wrote a colab notebook that shows how this solution works without the need to perform any other operation.
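The same fancy-indexing idea, shown here with NumPy for anyone without torch at hand (a sketch; the torch call behaves analogously):

```python
import numpy as np

small = np.array([10., 20., 30.])   # 1D lookup table, shape (n,)
big = np.array([[0, 1], [2, 0]])    # integer indices into small
small[big]
# array([[10., 20.],
#        [30., 10.]])
```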
2
2
79,390,708
2025-1-27
https://stackoverflow.com/questions/79390708/understanding-type-variance-in-python-protocols-with-generic-types
I'm trying to understand how type variance works with Python protocols and generics. My test cases seem to contradict what I expect regarding invariant, covariant, and contravariant behavior. Here's a minimal example demonstrating the issue: from typing import TypeVar, Protocol # Type variables T = TypeVar('T') T_co = ...
feeder3 works because Python's structural typing checks method return type covariance (accepting Dog as Animal), but incorrectly ignores parameter contravariance (should reject Dog input for Animal parameter). This is a type checker limitation (PyCharm/mypy may differ). walker2 fails due to insufficient contravarianc...
2
4
79,386,410
2025-1-25
https://stackoverflow.com/questions/79386410/scrapy-bench-errors-with-assertionerror-on-execution
I ran this command to install conda install -c conda-forge scrapy pylint autopep8 -y then I ran scrapy bench to get the below error. The same thing is happening on global installation via pip command. Please help as I can't understand the reason for this error scrapy bench 2025-01-25 13:52:30 [scrapy.utils.log] INFO: S...
This is a bug in Scrapy introduced in 2.12.0. It's passing the wrong param to isinstance(). That function expects the first param to be the object to be verified (see the docs), but it's currently passing the Response class, which leads to the AssertionError we can see in your logs: File "C:\Users\Risha\anaconda3\envs\scr...
2
3
79,385,456
2025-1-24
https://stackoverflow.com/questions/79385456/create-background-for-a-3d-and-2d-plots
Can someone help me create a layout for plots with the following structure: A CD A/B EF B GH where A,B are 3D plots (fig.add_subplot(241, projection='3d')) and C,D,E,F,G,H are regular 2D plots. A/B represents a shared space where half of plot A and half of plot B appear.
You could use a subplot_mosaic: f, grid = plt.subplot_mosaic('''\ ACD ACD AEF BEF BGH BGH''', per_subplot_kw={('A', 'B'): {'projection': '3d'}} ) plt.tight_layout() Then plot on grid['A']/grid['B']/grid['C']/... Output: If you need a more flexible, play with the gridspec: f, grid = plt.subplot_mosaic('''\ ACD ACD AEF...
1
3
79,389,718
2025-1-27
https://stackoverflow.com/questions/79389718/python-queue-not-updated-outside-of-thread
I've created a Flask app that retrieves data from a queue that is updated in a separate thread. I'm not sure why the Queue is empty when I retrieve it from the Flask GET endpoint, and am a bit ignorant of what Queues being thread-safe is supposed to mean, since my example doesn't appear to reflect that. In the example ...
Queue is thread-safe but not process-shared. Could you please show your uvicorn command for running the server? Also, I see you are using debug=True. This option involves reloading, which can create two processes. I would suggest: use debug=False: app.run(host='0.0.0.0', port=PORT_UI, debug=False) Confirming it’s a si...
2
2
79,387,822
2025-1-26
https://stackoverflow.com/questions/79387822/how-to-set-major-x-ticks-in-matplotlib-boxplot-time-series
I'm working on creating a time series box plot that spans multiple years, and I want the x-axis to display only the month and year for the 1st of each month. I would like to limit the labels to every month, or maybe every couple of months. import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import...
You were almost there, just one more instruction is needed: import matplotlib.dates as mdates mdates.set_epoch('2020-01-01T00:00:00') This is because in Matplotlib the epoch (date 0) is 01/01/1970. This instruction has to be placed at the start of your code, and if you are in Jupyter you will need to restart your kernel.
2
2
79,387,857
2025-1-26
https://stackoverflow.com/questions/79387857/django-5-1-usercreationform-wont-allow-empty-passwords
I'm upgrading a Django 3.0 app to 5.1 and have been moving slowly through each minor release. So far so good. However, once I went from Django 5.0 to 5.1, I saw changed behavior with my "Create New User" page which uses a UserCreationForm form that allows empty passwords. If no password is supplied, a random one is gen...
Yes. In new versions of Django, the source code has changed and the behaviour of the BaseUserCreationForm class has changed accordingly. The password1 and password2 fields are now created using the static method SetPasswordMixin.create_password_fields(), and they default to required=False. This can be easily checked he...
1
2
79,388,211
2025-1-26
https://stackoverflow.com/questions/79388211/pip-install-fails-looking-into-var-private-folders
pip install . fails, while python3 ./setup.py install succeeds. I am trying to compile and install a module that is purely local for now, written in C++. The name of the module is tcore. The module uses python C API and numpy C API. pip install complains it cannot find numpy which is imported by my setup.py, however th...
Modern pip uses build isolation, it uses a transient virtual env to build a wheel and then installs the wheel into the target environment; this transient virtual env is your absent temporary directory; pip removes it after success or failure so you cannot find it. There are two ways to work around the problem: Install...
2
1
79,387,073
2025-1-25
https://stackoverflow.com/questions/79387073/unionlistnode-none-vs-optionallistnode-vs-optionallistnode
It seems we can use the following types for hinting LinkedLists in Python: Union[ListNode, None] or Union['ListNode', None] Optional[ListNode] Optional['ListNode'] ListNode | None ... Which of these should we prefer? Feel free to share any other relevant insights as well. Attempt Here we simply want to merge K sorted...
typing.Optional[T] is just shorthand for typing.Union[T, None]. Would always prefer the former to the latter for its succinctness. Union is of course still useful when using it with something else than None. After Python 3.10, a union of types can simply be written with the | operator (Optional[T] becomes T | None). So...
3
5
79,387,911
2025-1-26
https://stackoverflow.com/questions/79387911/how-to-make-values-into-rows-instead-of-columns-when-using-pivot-table-in-pandas
Say I have this data frame: import pandas as pd x = pd.DataFrame([[1, 'step', 'id', 22, 33], [2, 'step', 'id', 55, 66]], columns=['time', 'head_1', 'head_2', 'value_1', 'value_2']) print(x) time head_1 head_2 value_1 value_2 0 1 step id 22 33 1 2 step id 55 66 Then I use pivot table like below print(x.pivot_table(valu...
One straightforward way is to melt your dataframe so value_1 and value_2 become labels in single column and then pivot on that. Like: import pandas as pd x = pd.DataFrame( [ [1, 'step', 'id', 22, 33], [2, 'step', 'id', 55, 66] ], columns=['time','head_1', 'head_2', 'value_1', 'value_2'] ) melted = x.melt( id_vars=['tim...
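Run end-to-end, the melt-then-pivot approach looks roughly like this (the names for the melted columns are my own choice):

```python
import pandas as pd

x = pd.DataFrame(
    [[1, "step", "id", 22, 33], [2, "step", "id", 55, 66]],
    columns=["time", "head_1", "head_2", "value_1", "value_2"],
)

# turn value_1/value_2 from columns into labels in a single column
melted = x.melt(
    id_vars=["time", "head_1", "head_2"],
    value_vars=["value_1", "value_2"],
    var_name="which",
    value_name="value",
)

# now pivot so the value labels become rows and time becomes columns
out = melted.pivot_table(values="value", index="which", columns="time")
# rows: value_1 / value_2, columns: 1 / 2
```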
1
3
79,387,755
2025-1-26
https://stackoverflow.com/questions/79387755/how-to-resolve-incompatible-types-in-assignment-when-converting-dictionary-val
I am working with a dictionary in Python where a key ("expiryTime") initially holds a str value in ISO 8601 format (e.g., "2025-01-23T12:34:56"). At some point in my code, I convert this string into a datetime object using datetime.strptime. However, I encounter a mypy error during type checking: error: Incompatible ty...
For your options... You probably don't want to suppress the error, it's there for a reason :) The Union type doesn't enforce that your dictionary is all of one type or another, meaning it is hard to consume that variable and having to add handling for both types [I might need more context on your mocking to fully unde...
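One way to make the post-conversion type explicit is a TypedDict for the parsed record, so mypy sees datetime rather than a str/datetime union (a sketch; the parse helper and Record name are mine, only the field name comes from the question):

```python
from datetime import datetime
from typing import TypedDict

class Record(TypedDict):
    expiryTime: datetime  # after parsing, no longer a str

def parse(raw: dict) -> Record:
    # convert the ISO 8601 string exactly once, at the boundary
    return {
        "expiryTime": datetime.strptime(raw["expiryTime"], "%Y-%m-%dT%H:%M:%S")
    }

rec = parse({"expiryTime": "2025-01-23T12:34:56"})
rec["expiryTime"].year  # 2025
```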
2
3
79,387,246
2025-1-25
https://stackoverflow.com/questions/79387246/how-to-loop-through-all-distinct-triplets-of-an-array-such-that-they-are-of-the
As stated above, I need to efficiently count the number of distinct triplets of the form (a, b, b). In addition, the triplet is only valid if and only if it can be formed by deleting some integers from the array, only leaving behind that triplet in that specific ordering. What this is saying is that the triplets need t...
At the second-to-last occurrence of each b-value, add the number of different values that came before it. Takes about 1.5 seconds for array length 10^6. from collections import Counter def linear(arr): ctr = Counter(arr) A = set() result = 0 for b in arr: if ctr[b] == 2: result += len(A) - (b in A) ctr[b] -= 1 A.add(b)...
3
0
79,386,829
2025-1-25
https://stackoverflow.com/questions/79386829/how-to-provide-tomllib
Since Python 3.11 we can use the builtin library tomllib; before that we had access to the third-party library tomli and a few others. I have not analyzed both packages deeply, but came to the conclusion that I can replace tomli with tomllib for my purposes. My issue with the situation: how to handle the cha...
You can use PEP 496 – Environment Markers and PEP 508 – Dependency specification for Python Software Packages; they're usable in setup.py, setup.cfg, pyproject.toml, requirements.txt. In particular see PEP 631 – Dependency specification in pyproject.toml for pyproject.toml: [project] dependencies = [ 'tomli ; python_ve...
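On the code side, the usual companion to that conditional dependency is a version-gated import, so the rest of the program only ever sees one name (a common idiom, not specific to this answer):

```python
import sys

if sys.version_info >= (3, 11):
    import tomllib                 # stdlib since Python 3.11
else:  # falls back to the third-party backport on older interpreters
    import tomli as tomllib

data = tomllib.loads('name = "demo"\nanswer = 42')
# data == {"name": "demo", "answer": 42}
```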
2
1
79,379,114
2025-1-22
https://stackoverflow.com/questions/79379114/performance-optimization-for-minimax-algorithm-in-tic-tac-toe-with-variable-boar
I'm implementing a Tic-Tac-Toe game with an AI using the Minimax algorithm (with Alpha-Beta Pruning) to select optimal moves. However, I'm experiencing performance issues when the board size increases or the number of consecutive marks required to win (k) grows. The current implementation works fine for smaller board s...
Minimax, even with alpha-beta pruning, will have to look at an exponentially growing number of states. For larger board sizes this will mean you can only perform shallow searches. I would suggest switching to the Monte Carlo Tree Search (MCTS) algorithm. It uses random sampling, making decisions whether to explore new branches in ...
2
1
79,386,521
2025-1-25
https://stackoverflow.com/questions/79386521/polars-top-k-by-with-over-k-1-bug
Given the following dataFrame: pl.DataFrame({ 'A': ['a0', 'a0', 'a1', 'a1'], 'B': ['b1', 'b2', 'b1', 'b2'], 'x': [0, 10, 5, 1] }) I want to take value of column B with max value of column x within same value of A (taken from this question). I know there's solution with pl.Expr.get() and pl.Expr.arg_max(), but I wanted...
The error message produced when running your code without the window function is a bit more explicit and hints at a solution. df.with_columns( pl.col("B").top_k_by("x", 1) ) InvalidOperationError: Series B, length 1 doesn't match the DataFrame height of 4 If you want expression: col("B").top_k_by([dyn int: 1, co...
3
4
79,383,867
2025-1-24
https://stackoverflow.com/questions/79383867/streaming-multiple-videos-through-fastapi-to-web-browser-causes-http-requests-to
I have a large FastAPI application. There are many different endpoints, including one that is used to proxy video streams. The usage is something like this: the endpoint receives the video stream URL, opens it and returns it through streaming response. If I proxy 5 video streams, then everything is fine. If I proxy 6 s...
As you noted that you are testing the application through a web browser, you should be aware that every browser has a specific limit for parallel connections to a given hostname (as well as in general). That limit is hard coded (in Chrome and Firefox, for instance, it is 6); have a look here. So, if one opened 6 paralle...
1
3
79,385,534
2025-1-24
https://stackoverflow.com/questions/79385534/python-adaptor-for-working-around-relative-imports-in-my-qgis-plugin
I am writing a QGIS plugin. During early development, I wrote and tested the Qt GUI application independently of QGIS. I made use of absolute imports, and everything worked fine. Then, I had to adapt everything to the quirks of QGIS. I can't explain why and haven't been able to find any supporting documentation, but no...
What does your python path look like? For those relative imports to work, you need the directory containing your repository directory (not just the repository directory itself) to be on the python path. The reason the relative imports work when you run the code as a QGIS plugin is that the directory containing your rep...
1
1
79,385,866
2025-1-24
https://stackoverflow.com/questions/79385866/numpy-array-boolean-indexing-to-get-containing-element
Given a (3,2,2) array how do I get second dimension elements given a single value on the third dimension import numpy as np arr = np.array([ [[31., 1.], [41., 1.]], [[63., 1.],[73., 3.]], [[ 95., 1.], [100., 1]] ] ) ref = arr[(arr[:,:,0] > 41.) & (arr[:,:,0] <= 63)] print(ref) Result [[63. 1.]] Expected result [[63.,...
I think you want arr[((arr[:,:,0]>41)&(arr[:,:,0]<=63)).any(axis=1)] and arr[(arr[:,:,0] <= 63).any(axis=1)] Some explanation. First of all, a 3D array, is also a 2D array of 1D array, or a 1D array of 2D array. So, if the expected answer is an array of "whole parent", that is an array of 2D arrays (so a 3D array, wi...
2
2
79,382,645
2025-1-23
https://stackoverflow.com/questions/79382645/fastapi-why-does-synchronous-code-do-not-block-the-event-loop
I’ve been digging into FastAPI’s handling of synchronous and asynchronous endpoints, and I’ve come across a few things that I’m trying to understand more clearly, especially with regards to how blocking operations behave in Python. From what I understand, when a synchronous route (defined with def) is called, FastAPI o...
If the function is truly blocking (e.g., it’s waiting for something like time.sleep()), how is the event loop still able to execute other tasks concurrently? Isn’t the Python interpreter supposed to execute just one thread at a time? Only one thread is indeed executed at a time. The flaw in the quoted question is to ...
3
4
79,385,676
2025-1-24
https://stackoverflow.com/questions/79385676/filter-with-expression-expansion
Is it possible to convert the following filter, which uses two conditions, to something that uses expression expansion or a custom function in order to apply the DRY priciple (avoid the repetition)? Here is the example: import polars as pl df = pl.DataFrame( { "a": [1, 2, 3, 4, 5], "val1": [1, None, 0, 0, None], "val2"...
.any_horizontal() and .all_horizontal() can be used to build | and & chains. .not_() can also be used instead of ~ if you prefer. df.filter( pl.any_horizontal( pl.col("val1", "val2").is_in([None, 0]).not_() ) ) shape: (2, 3) ┌─────┬──────┬──────┐ │ a ┆ val1 ┆ val2 │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════...
2
2
79,382,803
2025-1-23
https://stackoverflow.com/questions/79382803/i-am-trying-to-cause-race-condition-for-demonstration-purposes-but-fail-to-fail
I am actively trying to get a race condition and cause problems in a calculation for demonstration purposes, but I can't achieve such a problem simply. My thought process was to create a counter variable, reach it from different threads and async functions (I did not try mp since it pauses the process) and increase it by one. Runn...
The time between fetching the value of counter, incrementing it and reassigning it back is very short. counter = counter + 1 In order to force a race condition for demonstration purposes, you should extend that window perhaps with sleep tmp = counter time.sleep(random.random()) counter = tmp + 1 I would also increase...
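A full demonstration along those lines, using plain threads (the iteration counts and sleep length are arbitrary; a fixed short sleep is enough to make lost updates essentially certain):

```python
import threading
import time

counter = 0

def worker(iterations):
    global counter
    for _ in range(iterations):
        tmp = counter          # read
        time.sleep(0.0001)     # widen the read-modify-write window
        counter = tmp + 1      # write back a possibly stale value

threads = [threading.Thread(target=worker, args=(50,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # far less than the 200 a correct program would report
```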
3
1
79,383,301
2025-1-24
https://stackoverflow.com/questions/79383301/is-there-a-way-to-use-list-of-indices-to-simultaneously-access-the-modules-of-nn
Is there a way to use list of indices to simultaneously access the modules of nn.ModuleList in python? I am working with pytorch ModuleList as described below, decision_modules = nn.ModuleList([nn.Linear(768, 768) for i in range(10)]) Our input data is of the shape x=torch.rand(32,768). Here 32 is the batch size and 7...
The ind tensor is of size (bs, n_decisions), which means we're choosing a different set of experts for each item in the batch. With this setup, the most efficient way to compute the output is to compute all experts for all batch items, then gather the desired choices after. This will be more performant in GPU compared ...
1
2
79,385,026
2025-1-24
https://stackoverflow.com/questions/79385026/pandas-groupby-with-tag-style-list
I have a dataset with 'tag-like' groupings: Id tags 0 item1 ['friends','family'] 1 item2 ['friends'] 2 item3 [] 3 item4 ['family','holiday'] So a row can belong to several groups. I want to create an object similar to groupby, so that I can use agg etc. df.groupby('tags').count() expected result tags count 0 'frien...
You can't have a row belong to multiple groups like your grpby object. Thus what you want to do is impossible in pure pandas, unless you duplicate the rows with explode, then you will be able to groupby.agg: out = (df.explode('tags') .groupby('tags', as_index=False) .agg(**{'count': ('tags', 'size')}) ) Output: tags ...
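With the question's sample data, the explode-then-groupby pipeline gives the expected counts (note the empty-list row becomes a NaN tag after explode and is dropped by groupby):

```python
import pandas as pd

df = pd.DataFrame({
    "Id": ["item1", "item2", "item3", "item4"],
    "tags": [["friends", "family"], ["friends"], [], ["family", "holiday"]],
})

out = (df.explode("tags")                  # one row per (Id, tag) pair
         .groupby("tags", as_index=False)  # NaN tags are dropped here
         .agg(count=("tags", "size")))
# tags: family 2, friends 2, holiday 1
```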
2
1
79,384,811
2025-1-24
https://stackoverflow.com/questions/79384811/issues-with-axes-matplotlib-inheritance
I'm trying to mimic the plt.subplots() behavior, but with custom classes. Rather than return Axes from subplots(), I would like to return CustomAxes. I've looked at the source code and don't understand why I am getting the traceback error below. I'm able to accomplish what I want without inheriting from Axes, but I thi...
I think the error is because you are not passing any position information (i.e. args) to Axes.__init__ when you call it via super. However, you can do this more simply without subclassing Figure, since subplots lets you specify an Axes subclass: import matplotlib.pyplot as plt from matplotlib.axes import Axes class Cus...
1
2
79,384,228
2025-1-24
https://stackoverflow.com/questions/79384228/batch-insert-data-using-psycopg2-vs-psycopg3
Currently I am inserting into a Postgres database using psycopg2. Data is large and the write frequency is also high, so my database has WAL disabled and a few other optimizations for faster writes. When I use psycopg2 with execute_values, I am able to write a batch of 1000 rows in 0.1-0.15 seconds. from psycopg2.extras import...
These examples aren't equivalent and it's not about the psycopg version: Your cur.executemany() in the second example runs one insert per row. The execute_values() in the first example can construct inserts with longer values lists, which is typically more effective. These both lose to a very simple batch insert that...
1
3
79,383,692
2025-1-24
https://stackoverflow.com/questions/79383692/how-to-get-start-indices-of-regions-of-empty-intervals
I have sorted start indices (included) and end indices (excluded) of intervals (obtained by using seachsorted), for instance: import numpy as np # Both arrays are of same size, and sorted. # Size of arrays is number of intervals. # Intervals do not overlap. # interval indices: 0 1 2 3 4 5 interval_start_idxs = np.array...
import numpy as np interval_start_idxs = np.array([0, 3, 3, 3, 6, 7]) interval_end_excl_idxs = np.array([2, 4, 4, 4, 7, 9]) is_region_start = np.r_[True, np.diff(interval_start_idxs) != 0] is_region_end = np.roll(is_region_start, -1) is_empty = (interval_start_idxs == interval_end_excl_idxs - 1) empty_interval_starts =...
1
2
79,384,474
2025-1-24
https://stackoverflow.com/questions/79384474/polars-get-column-value-at-another-columns-min-max-value
Given the following polars dataframe: pl.DataFrame({'A': ['a0', 'a0', 'a1', 'a1'], 'B': ['b1', 'b2', 'b1', 'b2'], 'x': [0, 10, 5, 1]}) shape: (4, 3) ┌─────┬─────┬─────┐ │ A ┆ B ┆ x │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞═════╪═════╪═════╡ │ a0 ┆ b1 ┆ 0 │ │ a0 ┆ b2 ┆ 10 │ │ a1 ┆ b1 ┆ 5 │ │ a1 ┆ b2 ┆ 1 │ └─────┴─────...
You can use pl.Expr.get and pl.Expr.arg_max to obtain the value of B with maximum corresponding value of x. This can be combined with the window function pl.Expr.over to perform the operation separately for each group defined by A. df.with_columns( pl.col("B").get(pl.col("x").arg_max()).over("A").alias("y") ) shape: (...
4
6
79,383,889
2025-1-24
https://stackoverflow.com/questions/79383889/summing-columns-of-pandas-dataframe-in-a-systematic-way
I have a pandas dataframe which looks like this: 1_2 1_3 1_4 2_3 2_4 3_4 1 5 2 8 2 2 4 3 4 5 8 5 8 8 8 9 3 3 4 3 4 4 8 3 8 0 7 4 2 2 where the columns are the 4C2 combinations of 1,2,3,4. And I would like to generate 4 new columns f_1, f_2, f_3, f_4 where the values of the columns are defined to be df['f_1'] = df['1_2...
Build a dictionary of the columns with str.split+explode+Index.groupby, and process them in a loop: s = df.columns.to_series().str.split('_').explode() d = s.index.groupby(s) for k, v in d.items(): df[f'f_{k}'] = df[v].sum(axis=1) You could also use eval instead of the loop once you have d: query = '\n'.join(f'f_{k} =...
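Run on the first two rows of the question's data, the loop variant looks like this end-to-end:

```python
import pandas as pd

df = pd.DataFrame(
    [[1, 5, 2, 8, 2, 2],
     [4, 3, 4, 5, 8, 5]],
    columns=["1_2", "1_3", "1_4", "2_3", "2_4", "3_4"],
)

# map each element ('1'..'4') to the columns that mention it
s = df.columns.to_series().str.split("_").explode()
d = s.index.groupby(s)   # e.g. '1' -> Index(['1_2', '1_3', '1_4'])

for k, v in d.items():
    df[f"f_{k}"] = df[v].sum(axis=1)
# row 0: f_1 = 1+5+2 = 8, f_2 = 1+8+2 = 11, f_4 = 2+2+2 = 6
```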
4
3
79,383,355
2025-1-24
https://stackoverflow.com/questions/79383355/custom-comparison-for-pandas-series-and-dictionary
I have a series with four categories A,B,C,D and their current value s1 = pd.Series({"A": 0.2, "B": 0.3, "C": 0.3, "D": 0.9}) And a threshold against which I need to compare the categories, threshold = {"custom": {"A, B": 0.6, "C": 0.3}, "default": 0.4} But the threshold has two categories summed together: A & B And it...
You could build a mapper to rename, then groupby.sum and compare to the reference thresholds: mapper = {x: k for k in threshold['custom'] for x in k.split(', ')} # {'A': 'A, B', 'B': 'A, B', 'C': 'C'} s2 = (s1.rename(mapper) .groupby(level=0).sum() ) out = s2.lt(s2.index.to_series() .map(threshold['custom']) .fillna(th...
3
3
79,382,816
2025-1-23
https://stackoverflow.com/questions/79382816/how-to-multiply-2x3x3x3-matrix-by-2x3-matrix-to-get-2x3-matrix
I am trying to compute some derivatives of neural network outputs. To be precise I need the jacobian matrix of the function that is represented by the neural network and the second derivative of the function with respect to its inputs. I want to multiply the derivative of the jacobian with a vector of same size as the ...
It seems like you're looking for einsum. Should be something like: result = torch.einsum('bi,bijk,bk->bj', x, second_order_derivative, x)
2
2
79,382,572
2025-1-23
https://stackoverflow.com/questions/79382572/condensing-a-python-method-that-does-a-different-comparison-depending-on-the-ope
I am trying to write a method that evaluates a statement, but the operator (>, <, =) is sent by the user. I am wondering if there is an easy way to write a more concise method. The simplified version of the code is: def comparsion(val1: int, val2: int, operator: str): if operator == ">": if val1 > val2: return True elif...
Use the operator module, which provides named functions for all the standard operators. Then you can use a dictionary to map from the operator string to the corresponding function. from operator import lt, gt, eq ops = {'<': lt, '>': gt, '=': eq} def comparsion(val1: int, val2: int, operator: str): return ops[operator]...
1
8
79,381,028
2025-1-23
https://stackoverflow.com/questions/79381028/why-cant-subfigures-be-nested-in-gridspecs-to-keep-their-suptitles-separate-in
I would expect this code: import matplotlib.pyplot as plt fig = plt.figure(figsize=(8, 6)) fig_gridspec = fig.add_gridspec(1, 1) top_subfig = fig.add_subfigure(fig_gridspec[(0, 0)]) top_subfig.suptitle("I am the top subfig") top_subfig_gridspec = top_subfig.add_gridspec(1, 1, top=.7) nested_subfig = top_subfig.add_subf...
Answering my own question. Subfigures are not meant to respect Gridspec keyword arguments. Further answering the question of what it is they do respect: they respect arguments passed to the subfigures method. It would be nice if this method also took top, left keywords, etc., but as of now it doesn't. However, this wor...
2
1
79,381,804
2025-1-23
https://stackoverflow.com/questions/79381804/identify-broken-xml-files-inside-a-zipped-archive
I am trying to read a large number of zipped files (.zip or .docx) in a loop, each again containing a large number of embedded XML (.xml) files inside them. However some of the embedded XML files are broken/corrupted. I can create a parser which ignores the errors and loads the XML contents. However, I want to know whi...
Using recovering_parser = etree.XMLParser(recover=True) is preventing you from being able to catch which files are broken. In order to catch those errors, you can use a try/except block. import re import os import zipfile from lxml import etree try: # xml parsing code here except Exception as e: # Debugging code here p...
1
2
79,380,546
2025-1-23
https://stackoverflow.com/questions/79380546/zero-pad-a-numpy-n-dimensional-array
Not a duplicate of Zero pad numpy array (that I posted 9 years ago, ouch!) because here it's about n-dimensional arrays. How to zero pad a numpy n-dimensional array, if possible in one line? Example: a = np.array([1, 2, 3]) zeropad(a, 8) # [1, 2, 3, 0, 0, 0, 0, 0] b = np.array([[1, 2], [3, 4], [5, 6]]) zeropad(b, (5, 2...
Instead of using pad, since you want to pad after, you could create an array of zeros and assign the existing values: out = np.zeros(pad, dtype=arr.dtype) out[np.indices(arr.shape, sparse=True)] = arr Or, if you only want to pad the first dimension, with resize. Just ensure that the array owns its data with copy: out ...
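Wrapped into the helper the question asks for (a sketch; pad may be an int for a 1-D array or a full target shape):

```python
import numpy as np

def zeropad(arr, pad):
    out = np.zeros(pad, dtype=arr.dtype)
    # sparse index grids select the "corner" of out with arr's shape
    out[np.indices(arr.shape, sparse=True)] = arr
    return out

zeropad(np.array([1, 2, 3]), 8)
# array([1, 2, 3, 0, 0, 0, 0, 0])
```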
2
4
79,379,447
2025-1-22
https://stackoverflow.com/questions/79379447/getting-form-results-using-python-request
Admittedly, this is my first time using requests. The form I'm attempting to use is here: https://trade.cbp.dhs.gov/ace/liquidation/LBNotice/ Here is my code: import requests headers = { 'Accept': 'application/json, text/javascript, */*; q=0.01', 'Content-Type': 'application/json; charset=UTF-8', 'User-Agent': 'Mozilla...
Cause In headers you are telling the server you are sending it JSON data, but in the request body you are sending form data. There is an inconsistency. Solution: send json data response = requests.post(url, json=payload) Remove the headers too. Explanation In the requests library, if you pass a dictionary to the data parameter, your ...
1
1
79,379,633
2025-1-23
https://stackoverflow.com/questions/79379633/why-is-the-accuracy-of-scipy-integrate-solve-ivp-rk45-extremely-poor-compared
What I want to solve I solved a system of ordinary differential equations using scipy.integrate.solve_ivp and compared the results with my homemade 4th-order Runge-Kutta (RK4) implementation. Surprisingly, the accuracy of solve_ivp (using RK45) is significantly worse. Could someone help me understand why this might be ...
I get substantially better results if I lower the tolerances on solve_ivp(). e.g. result = solve_ivp( fun=get_v_and_a, t_span=[0, end_time], y0=initial_values, method="RK45", rtol=1e-5, atol=1e-6, t_eval=time_points ) The default value of rtol is 1e-3, and changing the value to 1e-5 makes the simulation more accurate....
1
4
79,378,514
2025-1-22
https://stackoverflow.com/questions/79378514/force-altair-chart-to-display-years
Using a data frame of dates and values starting from 1 Jan 2022: import datetime as dt import altair as alt import polars as pl import numpy as np alt.renderers.enable("browser") dates = pl.date_range(dt.date(2022, 1, 1), dt.date(2025, 1, 22), "1d", eager = True) values = np.random.uniform(size = len(dates)) df = pl.Da...
You can use labelExpr to build your own logic for setting tick labels. For example, this gives the year if the month is January and the month otherwise. dates_b = pl.date_range(dt.date(2020, 1, 1), dt.date(2025, 1, 22), "1d", eager=True) values_b = np.random.uniform(size=len(dates_b)) df_b = pl.DataFrame({"dates": date...
2
2
79,376,790
2025-1-22
https://stackoverflow.com/questions/79376790/problem-with-return-when-calling-a-python-function-within-matlab
I encountered the problem that when calling a Python function within the MATLAB environment, the return value from Python is not recognized in MATLAB - I always get a py.NoneType as an output. Here is my code; it sends e-mails. I can see the print commands as an output in MATLAB but cannot capture the return v...
I solved the problem by simply restarting MATLAB; I was not aware of this apparently simple "issue". So when changing the Python code, one has to restart MATLAB to execute the updated Python code. Another solution might be to somehow terminate and reboot the Python environment in MATLAB.
2
3
79,377,681
2025-1-22
https://stackoverflow.com/questions/79377681/is-it-a-bug-to-use-subprocess-run-in-a-multithreaded-script
I have a long build script which is a mix of "real" python code and lengthy subprocess.run() calls to things like debootstrap, wget, apt-get or build.sh. Everything is parallelized. One thread does debootstrap followed by apt-get, while another does build.sh, then both are joined and we start more threads for combining...
Is it a bug to call subprocess.run() in a python script with more than one thread? No. I do that all the time. I think the answer you linked is misguided. After all, you don't even know whether subprocess.Popen() and its facades like subprocess.run() use the fork syscall (especially on Windows, they certainly don't, ...
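A minimal sketch of the pattern in question: each thread blocks in its own subprocess.run() call while the child processes run concurrently (the commands here are trivial stand-ins for the build steps named in the question):

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_step(tag: str) -> str:
    # Each thread blocks in its own subprocess.run(); the GIL is released
    # while waiting on the child, so the children execute in parallel.
    result = subprocess.run(
        [sys.executable, "-c", f"print('done {tag}')"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

with ThreadPoolExecutor(max_workers=2) as pool:
    # pool.map preserves input order in the results
    outputs = list(pool.map(run_step, ["debootstrap", "build"]))

print(outputs)  # ['done debootstrap', 'done build']
```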
2
5
79,376,490
2025-1-22
https://stackoverflow.com/questions/79376490/using-regex-in-python-to-parse-a-string-for-variables-names-and-values
I need some help with Python regex. My "payload" string looks like this "status=0/eStopStatus=2/version=1.0.16/runTime=005320" My code uses the following to parse out a list of variables and values: variables = re.findall(r'([\w]+)=', payload) values = re.findall(r"[-+]?(?:\d*\.*\d+)", payload) My variables are parsed cor...
This will work perfectly: values = re.findall(r"[-+]?(\d+(?:\.\d+)*)", payload) Here is a link to the pattern and test string: https://regex101.com/r/YbcMUR/1 TEST CODE: import re payload = "status=0/eStopStatus=2/version=1.0.16/runTime=005320" keywords_pattern = r'([\w]+)=' values_pattern = r'[-+]?(\d+(?:\.\d+)*)' va...
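An alternative worth considering: capture the name and the value in a single pass, so the two lists can never drift out of alignment (a sketch, assuming every value runs up to the next `/` separator):

```python
import re

payload = "status=0/eStopStatus=2/version=1.0.16/runTime=005320"

# One findall over (name)=(value) keeps each name paired with its own value
pairs = dict(re.findall(r"(\w+)=([^/]+)", payload))

print(pairs)
# {'status': '0', 'eStopStatus': '2', 'version': '1.0.16', 'runTime': '005320'}
```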
1
0
79,376,634
2025-1-22
https://stackoverflow.com/questions/79376634/how-to-merge-dataframes-over-multiple-columns-and-split-rows
I have two datafames: df1 = pd.DataFrame({ 'from': [0, 2, 8, 26, 35, 46], 'to': [2, 8, 26, 35, 46, 48], 'int': [2, 6, 18, 9, 11, 2]}) df2 = pd.DataFrame({ 'from': [0, 2, 8, 17, 34], 'to': [2, 8, 17, 34, 49], 'int': [2, 6, 9, 17, 15]}) I want to create a new dataframe that looks like this: df = pd.DataFrame({ 'from': [...
First, combine all unique from and to values from both df1 and df2 to create a set of breakpoints: breakpoints = set(df1['from']).union(df1['to']).union(df2['from']).union(df2['to']) breakpoints = sorted(breakpoints) In the example, this is [0, 2, 8, 17, 26, 34, 35, 46, 48, 49]. Now, create a new dataframe with these...
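One way the splitting step could look, as a sketch: for each original row, emit one sub-interval per consecutive pair of breakpoints inside its range. Carrying the original 'int' over unchanged is an assumption here, since the question's expected output is truncated:

```python
import pandas as pd

df1 = pd.DataFrame({'from': [0, 2, 8, 26, 35, 46],
                    'to':   [2, 8, 26, 35, 46, 48],
                    'int':  [2, 6, 18, 9, 11, 2]})
df2 = pd.DataFrame({'from': [0, 2, 8, 17, 34],
                    'to':   [2, 8, 17, 34, 49],
                    'int':  [2, 6, 9, 17, 15]})

breakpoints = sorted(set(df1['from']) | set(df1['to'])
                     | set(df2['from']) | set(df2['to']))

def split_rows(df, points):
    # Split each [from, to] range at every breakpoint that falls inside it;
    # the original 'int' is simply copied to each piece (an assumption).
    rows = []
    for lo0, hi0, val in zip(df['from'], df['to'], df['int']):
        inner = [p for p in points if lo0 <= p <= hi0]
        for lo, hi in zip(inner, inner[1:]):
            rows.append({'from': lo, 'to': hi, 'int': val})
    return pd.DataFrame(rows)

out1 = split_rows(df1, breakpoints)
print(out1)
```

With the sample data this splits df1's row [8, 26] into [8, 17] and [17, 26], and [26, 35] into [26, 34] and [34, 35].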
1
2
79,376,281
2025-1-22
https://stackoverflow.com/questions/79376281/mod-operator-in-free-pascal-gives-a-different-result-than-expected
The mod operator in Free Pascal does not produce the results I would expect. This can be demonstrated by the program below whose output does not agree with the result of the same calculation in Python (or Google). program test(output); var a, b, c: longint; begin a := -1282397916; b := 2147483647; c := a mod b; writeln...
With respect to a non‑negative integral dividend and positive integral divisor there is no ambiguity. All programming languages do the same. Once you use other values though, programming languages differ. Pascal uses a Euclidean‑like definition of the modulus operator. In ISO standard 7185 (“Standard Pascal”), page 48,...
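The divergence is easy to demonstrate from Python itself, where `%` uses floored division (sign of the divisor) while `math.fmod` truncates like C, which matches what Free Pascal's mod computes for these operands:

```python
import math

a = -1282397916
b = 2147483647

# Python's % follows the sign of the divisor (floored division),
# so the result is non-negative here:
print(a % b)            # 865085731

# C-style truncated remainder keeps the sign of the dividend; since
# |a| < b, the result is just a itself:
print(math.fmod(a, b))  # -1282397916.0
```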
3
4
79,412,275
2025-2-4
https://stackoverflow.com/questions/79412275/pandas-performance-while-iterating-a-state-vector
I want to make a pandas dataframe that describes the state of a system at different times. I have the initial state, which describes the first row. Each row corresponds to a time. I have reserved the first two columns for "household" / statistics. The following columns are state parameters. At each iteration/row a number o...
Here's one approach that should be much faster: Data sample num_cols = 4 n_changes = 6 np.random.seed(0) # reproducibility # setup ... df_change col val 1 C 0.144044 4 A 1.454274 5 A 0.761038 7 A 0.121675 7 C 0.443863 10 B 0.333674 state {'A': 0.5488135039273248, 'B': 0.7151893663724195, 'C': 0.6027633760716439, 'D':...
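The vectorized idea can be sketched on a simplified toy case: pivot the sparse changes to wide form, align on the full time index, seed row 0 with the initial state, and forward-fill. The data below is invented for illustration, not the answer's exact setup:

```python
import pandas as pd

state = {'A': 1.0, 'B': 2.0, 'C': 3.0}          # initial state (toy values)
df_change = pd.DataFrame({'col': ['C', 'A', 'B'],
                          'val': [0.5, 4.0, 9.0]},
                         index=[1, 4, 5])        # index = row/time of the change
n_rows = 7

# Pivot the sparse changes to wide form (one column per state parameter),
# align on the full time index, seed row 0 with the initial state, ffill.
wide = df_change.pivot(columns='col', values='val')
wide = wide.reindex(range(n_rows)).reindex(columns=list(state))
wide.iloc[0] = wide.iloc[0].fillna(pd.Series(state))
result = wide.ffill()

print(result)
```

Each row now holds the full state at that time without any Python-level loop over rows.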
1
2
79,403,046
2025-1-31
https://stackoverflow.com/questions/79403046/unable-to-run-any-code-after-launching-the-app-in-flask
My question is: are you able to run any code after if __name__ == "__main__": app.run() in Flask? I tried to print something after the above line to check, and it only appeared after I exited the process. I was unable to find an answer to this question. If yes, please guide me on how to accomplish this. I need t...
As said by @JonSG and @Chris, I needed to create a route method before the app.run() to make the code work. Thanks!
1
0
79,413,005
2025-2-4
https://stackoverflow.com/questions/79413005/i-have-a-time-in-string-format-which-is-of-new-york-i-want-the-time-to-be-conv
Input (New York Time in string format) = '2024-11-01 13:00:00' Output (UTC Time in string format) = '2024-11-01 17:00:00'
Parse the time. It won't be "time zone aware", so apply local time zone, convert to UTC and format again: import datetime as dt import zoneinfo as zi zone = zi.ZoneInfo('America/New_York') fmt = '%Y-%m-%d %H:%M:%S' s = '2024-11-01 13:00:00' print(s) # parse the time and apply the local time zone nyt = dt.datetime.strpt...
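Putting the steps together as one function (a sketch following the approach above; the expected result reflects that 1 Nov 2024 is still in EDT, UTC-4):

```python
import datetime as dt
import zoneinfo as zi

def ny_to_utc(s: str) -> str:
    fmt = '%Y-%m-%d %H:%M:%S'
    naive = dt.datetime.strptime(s, fmt)
    # attach the New York zone, then convert to UTC
    aware = naive.replace(tzinfo=zi.ZoneInfo('America/New_York'))
    return aware.astimezone(dt.timezone.utc).strftime(fmt)

print(ny_to_utc('2024-11-01 13:00:00'))  # 2024-11-01 17:00:00
```

Note that zoneinfo handles DST automatically: the same input in December (EST, UTC-5) would shift by five hours instead of four.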
1
4
79,412,706
2025-2-4
https://stackoverflow.com/questions/79412706/whats-going-on-with-the-chaining-in-pythons-string-membership-tests
I just realized I had a typo in my membership test and was worried this bug had been causing issues for a while. However, the code had behaved just as expected. Example: "test" in "testing" in "testing" in "testing" This left me wondering how this membership expression works and why it's allowed. I tried applying some...
in is a comparison operator. As described at the top of the section in the docs you linked to, all comparison operators can be chained: Formally, if a, b, c, …, y, z are expressions and op1, op2, …, opN are comparison operators, then a op1 b op2 c ... y opN z is equivalent to a op1 b and b op2 c and ... y opN z, excep...
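The equivalence can be checked directly, including the short-circuit behavior:

```python
chained = "test" in "testing" in "testing" in "testing"
expanded = (("test" in "testing")
            and ("testing" in "testing")
            and ("testing" in "testing"))

print(chained, expanded)  # True True

# The chain short-circuits just like the expanded form: once a link is
# False, the remaining comparisons are never evaluated.
print("abc" in "ab" in "abcd")  # False: "abc" in "ab" already fails
```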
2
2
79,412,615
2025-2-4
https://stackoverflow.com/questions/79412615/understanding-and-fixing-the-regex
I have a regex on my input parameter: r"^(ABC-\d{2,9})|(ABz?-\d{3})$" Ideally it should not allow parameters with ++ or -- at the end, but it does. Why is the regex not working in this case but works in all other scenarios? ABC-12 is valid. ABC-123456789 is valid. AB-123 is valid. ABz-123 is valid.
The problem is that your ^ and $ anchors don't apply to the entire pattern. You match ^ only in the first alternative, and $ only in the second alternative. So if the input matches (ABC-\d{2,9}) at the beginning, the match will succeed even if there's more after this. You can put a non-capturing group around everything...
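A side-by-side check of the two patterns makes the difference concrete:

```python
import re

broken = re.compile(r"^(ABC-\d{2,9})|(ABz?-\d{3})$")
# Non-capturing group makes ^ and $ guard both alternatives:
fixed = re.compile(r"^(?:ABC-\d{2,9}|ABz?-\d{3})$")

# The broken pattern accepts trailing junk because $ only belongs to
# the second alternative:
print(bool(broken.match("ABC-12++")))  # True
print(bool(fixed.match("ABC-12++")))   # False

for s in ["ABC-12", "ABC-123456789", "AB-123", "ABz-123"]:
    print(s, bool(fixed.match(s)))     # all True
```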
1
8
79,410,015
2025-2-3
https://stackoverflow.com/questions/79410015/i-am-not-able-to-login-using-selenium-to-www-moneycontrol-com
The main issue: I am not able to properly navigate to the login form and fill in the email and password to enable the login button. I tried the code below to fill in the login form on www.moneycontrol.com, but it seems a little bit complicated. The login form is multi-layered: hover over or click "Hello, Login", click the Log-in button inside the floatin...
As others in the comment have mentioned, this site is plagued with popup ads. Instead of logging in from the main page, try the dedicated login URL. https://accounts.moneycontrol.com/mclogin You'll be able to log in easily using this URL. Then go to the main page. Your code has another problem. The site detects headles...
1
4
79,411,167
2025-2-4
https://stackoverflow.com/questions/79411167/pandas-apply-function-return-a-list-to-new-column
I have a pandas dataframe: import pandas as pd import numpy as np np.random.seed(150) df = pd.DataFrame(np.random.randint(0, 10, size=(10, 2)), columns=['A', 'B']) I want to add a new column "C" whose values are the combined list of every three rows in column "B". So I use the following method to achieve my needs, b...
You can use numpy's sliding_window_view: from numpy.lib.stride_tricks import sliding_window_view as swv N = 3 df['C'] = pd.Series(swv(df['B'], N).tolist(), index=df.index[N-1:]) Output: A B C 0 4 9 NaN 1 0 2 NaN 2 4 5 [9, 2, 5] 3 7 9 [2, 5, 9] 4 8 3 [5, 9, 3] 5 8 1 [9, 3, 1] 6 1 4 [3, 1, 4] 7 4 1 [1, 4, 1] 8 1 9 [4, ...
5
6
79,409,259
2025-2-3
https://stackoverflow.com/questions/79409259/how-does-hydra-partial-interact-with-seeding
In the configuration management library Hydra, it is possible to only partially instantiate classes defined in configuration using the _partial_ keyword. The library explains that this results in a functools.partial. I wonder how this interacts with seeding. E.g. with pytorch torch.manual_seed() lightnings seed_everyt...
Before hydra.utils.instantiate is called, no third-party code is run by Hydra. So you can set your seeds before each use of instantiate; or, if it's a partial, before each call to the partial. Here is a complete toy example, based on Hydra's doc overview, which creates a partial to instantiate an optimizer or a model, that takes ...
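The key point, that randomness is consumed when the partial is *called*, not when it is built, can be shown without Hydra at all, using a plain functools.partial as a stand-in for what a _partial_ config produces (the function below is a toy, not a Hydra API):

```python
import functools
import random

def make_noise(scale: float, n: int) -> list:
    # consumes the global RNG stream, like a model's weight init would
    return [scale * random.random() for _ in range(n)]

# analogous to what Hydra builds for a _partial_: the config-supplied
# arguments are bound now, the actual call happens later
noise_factory = functools.partial(make_noise, scale=2.0)

random.seed(42)
first = noise_factory(n=3)

random.seed(42)          # re-seed *before* calling the partial again
second = noise_factory(n=3)

print(first == second)   # True: the partial draws randomness at call time
```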
2
1
79,404,210
2025-2-1
https://stackoverflow.com/questions/79404210/how-to-cancel-trigonometric-expressions-in-sympy
I have a bunch of expressions deque([-6*cos(th)**3 - 9*cos(th), (11*cos(th)**2 + 4)*sin(th), -6*sin(th)**2*cos(th), sin(th)**3]). Then I run them through some code that iteratively takes a derivative, adds, and then divides by sin(th): import sympy as sp th = sp.symbols('th') order = 4 for nu in range(order + 1, 2*orde...
Rewriting to exp and doing the simplifications seems to work in this case: from collections import deque from sympy import * import sympy as sp th = sp.symbols('th') order = 4 exprs = deque([i.simplify() for i in [-6*cos(th)**3 - 9*cos(th), (11*cos(th)**2 + 4)*sin(th), -6*sin(th)**2*cos(th), sin(th)**3]]) def simpler(e...
2
2
79,407,738
2025-2-3
https://stackoverflow.com/questions/79407738/poetry-installed-tensorflow-but-python-says-modulenotfounderror-no-module-nam
The tensorflow Python package installed using poetry is not recognised within Python (poetry version - 2.0.1). Is anyone else facing this issue, and how did you solve it? C:\Users\username\Documents\folder>poetry add tensorflow Using version ^2.18.0 for tensorflow Updating dependencies Resolving dependencies... (0.9s) Pack...
tensorflow-intel has not been installed, which it should be since you are on Windows. Please go to https://github.com/tensorflow/tensorflow/issues/75415 and encourage the TensorFlow folk to put consistent metadata in all of their wheels, so that cross-platform resolvers like poetry and uv can reliably derive accurate in...
1
1
79,408,932
2025-2-3
https://stackoverflow.com/questions/79408932/im-having-issues-with-logging-is-there-anyway-to-fix-this
Is there a way to make this code not spam in the second file without interfering with any other functions? "Anomaly.log" is being spammed although "Bandit.log" is not; what am I doing wrong? import pyautogui import schedule import time import logging # Initialize a counter call_count = 0 # Example calls de...
When you call this function the first time: logger = logging.getLogger(__name__) It creates a new logger object. But if you call it again within the same program execution, it remembers the logger object you already created and just fetches it, instead of creating a new one. And then each time you call this: logger.ad...
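A common guard is to check whether the cached logger already has handlers before adding another one, for example:

```python
import logging

def get_file_logger(name: str, path: str) -> logging.Logger:
    logger = logging.getLogger(name)
    # getLogger returns the same cached object every time, so only
    # attach a handler if this logger doesn't have one yet
    if not logger.handlers:
        # delay=True defers opening the file until the first record
        logger.addHandler(logging.FileHandler(path, delay=True))
        logger.setLevel(logging.INFO)
    return logger

log_a = get_file_logger("anomaly", "Anomaly.log")
log_b = get_file_logger("anomaly", "Anomaly.log")  # same object, no 2nd handler

print(log_a is log_b)       # True
print(len(log_a.handlers))  # 1
```

With this guard, calling the function on every scheduled run no longer multiplies the handlers, so each message is written once.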
1
2